Should GPUs Be Preheated Prior to Benchmarking?

With heat-sensitive technologies like Nvidia’s GPU Boost 2.0 and AMD’s PowerTune, modern graphics cards run at variable clock rates, with an eye toward containing power consumption, limiting heat, protecting the chip from damage, and boosting performance when possible. This means that although a graphics card may have a nominal clock speed specification, the majority of the time that GPU will not be running at exactly that clock speed, but rather within a range of clock speeds around that point.
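As a rough illustration of how much those clocks actually move around, you can poll the reported core clock and temperature while a game is running. The sketch below is just that, a sketch: it assumes a single Nvidia GPU with nvidia-smi on the PATH, and AMD users would need to substitute their own sensor tooling.

```python
import subprocess
import time

def sample_gpu(duration_s=60, interval_s=1.0):
    """Poll the core clock (MHz) and temperature (C) once per interval.
    Assumes a single Nvidia GPU and nvidia-smi available on the PATH."""
    samples = []
    deadline = time.time() + duration_s
    while time.time() < deadline:
        out = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=clocks.sm,temperature.gpu",
             "--format=csv,noheader,nounits"],
            text=True,
        ).strip()
        clock_mhz, temp_c = (int(v) for v in out.split(","))
        samples.append((clock_mhz, temp_c))
        time.sleep(interval_s)
    return samples

if __name__ == "__main__":
    data = sample_gpu(duration_s=30)
    clocks = [c for c, _ in data]
    print(f"core clock ranged from {min(clocks)} to {max(clocks)} MHz "
          f"across {len(data)} samples")
```

Run something demanding in the background while this logs and you will see the clock wander well away from the number printed on the box.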

Older GPUs had temperature safety limits beyond which the GPU would drop into its idle state to avoid damaging itself. In extreme cases, like a heatwave, some GPUs would hit these thermal limits sooner than they would under normal conditions, leading to lower-than-expected performance. The flip side was that in some scenarios you could get better or more consistent performance out of your GPU by dragging your PC outside on a cold night.

But those days are over. Whereas five years ago testing the impact of ambient temperature on GPU performance might have made for a good episode of Mythbusters, we now face the reality that in the race to squeeze as much performance as possible out of modern GPUs, we’ve drastically increased their sensitivity to environmental conditions.

The obvious example of GPU performance varying significantly with thermal conditions is the recent set of issues surrounding AMD’s R9 series GPUs. Due to the way AMD was controlling fan speeds on these cards, not all R9 GPUs were spinning their fans at the intended rate. The result was that the press sample GPUs, which came from a different production run than the retail cards, ran at higher fan speeds than the parts consumers could buy, giving the press samples more cooling capacity than their retail counterparts. Thus the performance of the GPUs sampled to the press was sometimes 20 percent higher than what you could achieve with a retail part.

Of course, AMD was able to remedy this issue with a driver update, but it’s a great example of the adverse effects of the hypersensitive GPU tuning schemes that essentially all major GPU designers have now implemented.

When Nvidia’s first Kepler GPU arrived in the form of the GTX 680, with its variable boost clock, there was some contention about how to account for that level of variability when reviewing GPUs with GPU Boost. A few reviewers started preheating their GPUs by letting the benchmarks run for a bit beforehand, to avoid any favorable impact from a cold start, where a GPU can take advantage of the extra initial thermal headroom to perform better than it would otherwise.

I actually quite like this idea for a few reasons. First off, preheating GPUs makes the benchmarks a better representation of a real-world scenario, because most users don’t just play a game for sixty seconds and then switch to a dozen different games over the next hour. Rather, most gamers will play for somewhere between half an hour and six hours. Preheating a GPU therefore allows our benchmarks to better represent the kind of performance a consumer should expect over their entire gaming session, as opposed to just the very beginning of it.

But another, more important reason for reviewers to preheat their GPUs is boost variance. A cold GPU will sustain its boost clocks for longer than a GPU at its nominal operating temperature. By preheating GPUs to their nominal operating temperature prior to benchmarking them, we can avoid skewing the results. The cost of this process, though, is time, a luxury that reviewers are often short on. Intel is perhaps the only company that gives reviewers an adequate amount of time to review its products prior to launch, and sometimes not even that statement holds true.
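If you want to take the guesswork out of “is the card warm yet,” one approach is to keep a load running and wait until the reported temperature stops climbing. Here is a minimal sketch of that idea, again assuming nvidia-smi for the sensor read on a single-GPU system; on an R9 290X you’d swap in the equivalent AMD tool and a window suited to its 95-degree target.

```python
import subprocess
import time

def gpu_temp_c():
    """Read the current GPU temperature via nvidia-smi (single-GPU system assumed)."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return int(out.strip())

def wait_for_equilibrium(tolerance_c=1, window_s=60, poll_s=5):
    """Block until the temperature varies by no more than tolerance_c across a
    full observation window. Assumes a GPU load is already running."""
    needed = window_s // poll_s
    history = []
    while True:
        history.append(gpu_temp_c())
        history = history[-needed:]
        if len(history) == needed and max(history) - min(history) <= tolerance_c:
            return history[-1]
        time.sleep(poll_s)
```

Once this returns, the card is at its steady-state operating temperature, and any benchmark pass that follows reflects warmed-up behavior rather than cold-start boost.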

In any case, we needed to figure out the impact of preheating GPUs before benchmarking them. So we benchmarked an R9 290X in Battlefield 3 at the game’s maximum settings on a 1080p monitor with AMD’s 13.11 Beta 9.2 driver, after letting the game run for various amounts of time before actually recording the benchmark. We also did one cold-start run, where we began the benchmark as soon as we could after booting the PC, and another where we used FurMark to get the R9 290X up to a stable 95 degrees Celsius operating temperature, which coincidentally took five minutes.
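For the curious, that test matrix boils down to a simple loop: warm the card for a set amount of time, then capture a fixed-length benchmark pass. The sketch below is purely illustrative; the run_warmup and run_benchmark commands are hypothetical placeholders for however the game loop and frame capture are actually launched on the test bench, and the duration list is just an example.

```python
import subprocess
import time

# Hypothetical placeholder commands: substitute the real game launcher /
# stress tool and the frame-capture utility used on the test bench.
WARMUP_CMD = ["run_warmup"]        # loops the game scene without recording
BENCHMARK_CMD = ["run_benchmark"]  # records one fixed-length benchmark pass

# Example preheat durations in minutes; 0 represents a cold start.
PREHEAT_MINUTES = [0, 5, 10]

def run_pass(preheat_minutes):
    """Warm the GPU for the requested time, then record one benchmark pass."""
    if preheat_minutes:
        warmup = subprocess.Popen(WARMUP_CMD)
        time.sleep(preheat_minutes * 60)
        warmup.terminate()
        warmup.wait()
    subprocess.run(BENCHMARK_CMD, check=True)

for minutes in PREHEAT_MINUTES:
    run_pass(minutes)
```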

[Chart: Preheating the R9 290X]

The advantage of using Battlefield 3 for this test is that the R9 290X maintains a pretty high frame rate, allowing us to pick up on any fluctuations, however minor. Yet if we take the highest frame rate here and compare it to the lowest, there is still only a two percent difference between them.
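That two percent figure is just the spread between the best and worst average frame rates, relative to the worst. A quick helper makes the calculation explicit; the numbers in the example call are hypothetical, not the actual chart data.

```python
def percent_spread(fps_values):
    """Spread between the best and worst average frame rate, as a percentage
    of the worst result."""
    worst, best = min(fps_values), max(fps_values)
    return (best - worst) / worst * 100

# Hypothetical averages for illustration only; a ~2 fps gap at ~100 fps
# works out to roughly a two percent spread.
print(f"{percent_spread([98.0, 99.1, 100.0, 99.5]):.1f}%")
```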

Actually, if we exclude the results from the ten-minute preheating run, you could even argue that preheating the R9 290X for any length of time improves performance by about one percent. I have a theory about why this difference exists, but however you slice it, preheating the GPU prior to benchmarking appears to make little difference, at least on the R9 290X.

Despite all of this I’m still a big fan of GPU preheating, because it’s more representative of the experience a consumer will actually have. Yeah, it’s a bit of a pain and it makes the benchmarks take longer to run, but I think it’s a trade-off worth making. Plus it’s getting colder these days and I’ve been looking for a svelte space heater.S|A

Thomas Ryan is a freelance technology writer and photographer from Seattle, living in Austin. You can also find his work on SemiAccurate and PCWorld. He has a BA in Geography from the University of Washington with a minor in Urban Design and Planning and specializes in geospatial data science. If you have a hardware performance question or an interesting data set Thomas has you covered.