GPU Boost 2.0: Overclocking & Overclocking Your Monitor

The first half of the GPU Boost 2.0 story is of course the fact that with 2.0 NVIDIA is switching from power based controls to temperature based controls. However, there is also a second story here, and that is the impact on overclocking.

With the GTX 680, overclocking capabilities were limited, particularly in comparison to the GeForce 500 series. The GTX 680 could have its power target raised (guaranteed “overclocking”), and further overclocking could be achieved by using clock offsets. But perhaps most importantly, voltage control was forbidden, with NVIDIA going so far as to nix EVGA and MSI’s voltage-adjustable products after a short time on the market.

There are a number of reasons for this, and hopefully one day soon we’ll be able to get into NVIDIA’s Project Greenlight video card approval process in enough detail to better explain it. But the primary concern was that without strict voltage limits, some of the more excessive users might blow out their cards by setting voltages too high. And while the responsibility for this ultimately falls to the user, and in some cases to the manufacturer of the card (depending on the warranty), it makes NVIDIA look bad regardless. The end result was that voltage control on the GTX 680 (and lower cards) was disabled for everyone, regardless of what a given card was capable of.

With Titan this has finally changed, at least to some degree. In short, NVIDIA is bringing back overvoltage control, albeit in a more limited fashion.

For Titan cards, partners will have the final say in whether they wish to allow overvolting or not. If they choose to allow it, they get to set a maximum voltage (Vmax) figure in their VBIOS. The user in turn is allowed to increase their voltage beyond NVIDIA’s default reliability voltage limit (Vrel), up to Vmax. As part of the process, however, users have to acknowledge that increasing their voltage beyond Vrel puts their card at risk and may reduce its lifetime. Only once that’s acknowledged will users be able to increase their voltages beyond Vrel.
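To make the mechanics concrete, below is a minimal sketch, in C with invented voltage figures, of the kind of clamping logic an overclocking utility might apply: requests above Vrel are only honored when the partner has set a higher Vmax in the VBIOS and the user has acknowledged the risk warning. This is an illustration of the policy as described, not NVIDIA’s actual implementation.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative only: how a tool might clamp a requested GPU voltage given
     * NVIDIA's reliability limit (Vrel) and the partner's VBIOS ceiling (Vmax).
     * The millivolt figures below are invented; the real limits live in the VBIOS. */
    typedef struct {
        double vrel_mv;   /* NVIDIA's default reliability voltage limit */
        double vmax_mv;   /* partner-set ceiling; equal to vrel_mv if overvolting is disabled */
    } VoltageLimits;

    static double clamp_voltage(double requested_mv, VoltageLimits lim, bool user_acknowledged_risk)
    {
        double ceiling = lim.vrel_mv;
        /* Going beyond Vrel requires both a partner-enabled Vmax and user acknowledgement. */
        if (lim.vmax_mv > lim.vrel_mv && user_acknowledged_risk)
            ceiling = lim.vmax_mv;
        return requested_mv > ceiling ? ceiling : requested_mv;
    }

    int main(void)
    {
        VoltageLimits titan = { 1100.0, 1150.0 };  /* made-up values */
        printf("Request 1150mV without acknowledgement: %.0f mV\n", clamp_voltage(1150.0, titan, false));
        printf("Request 1150mV with acknowledgement:    %.0f mV\n", clamp_voltage(1150.0, titan, true));
        return 0;
    }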

With that in mind, beyond overvolting, overclocking has also changed in some subtler ways. Memory and core offsets are still in place, but with the switch from power based monitoring to temperature based monitoring, the power target slider has been augmented with a separate temperature target slider.

The power target slider is still responsible for controlling the TDP as before, but with the ability to prioritize temperatures over power consumption, it appears to be somewhat redundant (or at least unnecessary) for more significant overclocking. That leaves us with the temperature slider, which is really a control for two functions.

First and foremost, of course, the temperature slider controls what the target temperature is for Titan. By default this is 80C, and it may be turned all the way up to 95C. The higher the temperature target, the more frequently Titan can reach its highest boost bins, in essence making this a weaker form of overclocking, just like the power target adjustment was on the GTX 680.
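To illustrate that first function, here is a rough sketch, with invented clock bins and a deliberately simplified decision rule, of how a temperature target gates access to the highest boost bins: stay under the target and the top bin is available, run over it and the card steps down.

    #include <stdio.h>

    /* Illustrative only: pick the highest boost bin the GPU is allowed to use
     * given its current temperature and the user's temperature target. The
     * clock list and the step-back rule are invented for the example. */
    static int boost_clock_mhz(float gpu_temp_c, float temp_target_c)
    {
        static const int bins[] = { 837, 876, 915, 928, 941, 954, 967, 980, 993 };
        const int top = (int)(sizeof(bins) / sizeof(bins[0])) - 1;

        if (gpu_temp_c < temp_target_c)
            return bins[top];                 /* under target: highest bin available */

        /* Over target: step down one bin per degree over, down to the base bin. */
        int over = (int)(gpu_temp_c - temp_target_c) + 1;
        int idx = top - over;
        return bins[idx < 0 ? 0 : idx];
    }

    int main(void)
    {
        /* Raising the target from 80C to 90C lets an 85C card keep its top bin. */
        printf("80C target, 85C reading: %d MHz\n", boost_clock_mhz(85.0f, 80.0f));
        printf("90C target, 85C reading: %d MHz\n", boost_clock_mhz(85.0f, 90.0f));
        return 0;
    }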

The second function controlled by the temperature slider is the fan curve, which for all practical purposes follows the temperature slider. With modern video cards ramping up their fan speeds rather quickly once they get into the 80C range, merely increasing the temperature target alone wouldn’t be particularly desirable in most cases due to the extra noise it would generate, so NVIDIA tied the fan curve to the temperature slider. By doing so it ensures that fan speeds stay relatively low until temperatures start to exceed the temperature target. This seems a bit counterintuitive at first, but when put in perspective of the goal (higher temperatures without an increase in fan speed) it starts to make sense.
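The fan side can be sketched the same way, again with made-up numbers: because the curve is anchored to the temperature target rather than to fixed temperatures, raising the target also pushes the fan ramp back.

    #include <stdio.h>

    /* Illustrative only: a fan curve anchored to the user's temperature target
     * rather than to fixed temperatures. All percentages are invented. */
    static float fan_speed_percent(float gpu_temp_c, float temp_target_c)
    {
        const float idle_fan = 30.0f;   /* fan floor, percent */
        const float max_fan  = 85.0f;   /* fan ceiling, percent */
        const float ramp_c   = 10.0f;   /* degrees over target to reach the ceiling */

        if (gpu_temp_c <= temp_target_c)
            return idle_fan;            /* stay quiet until the target is reached */

        float over = gpu_temp_c - temp_target_c;
        if (over >= ramp_c)
            return max_fan;
        return idle_fan + (max_fan - idle_fan) * (over / ramp_c);
    }

    int main(void)
    {
        /* Raising the target from 80C to 94C keeps the fan at its floor
         * for the same 85C reading. */
        printf("80C target, 85C reading: %.0f%%\n", fan_speed_percent(85.0f, 80.0f));
        printf("94C target, 85C reading: %.0f%%\n", fan_speed_percent(85.0f, 94.0f));
        return 0;
    }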

Finally, in what can only be described as a love letter to the boys over at 120hz.net, NVIDIA is also introducing a simplified monitor overclocking option, which can be used to increase the refresh rate sent to a monitor in order to coerce it into operating at that higher refresh rate. Notably, this isn’t anything that couldn’t be done before with some careful manipulation of the GeForce control panel’s custom resolution option, but with the monitor overclocking option exposed in PrecisionX and other utilities, monitor overclocking has been reduced to a simple slider rather than a complex mix of timings and pixel counts.

Though this feature can technically work with any monitor, it’s primarily geared towards monitors such as the various Korean LG-based 2560x1440 monitors that have hit the market in the past year, a number of which have come with electronics capable of operating far in excess of the 60Hz that is standard for those monitors. On the models that can handle it, modders have been able to push some of these 2560x1440 monitors to 120Hz and beyond, doubling their native 60Hz refresh rate and greatly improving smoothness to levels similar to a native 120Hz TN panel, but without the resolution and quality drawbacks inherent to those TN products.
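The underlying bandwidth math is simple enough. The sketch below, using illustrative reduced-blanking figures rather than exact timings, estimates the pixel clock a given mode requires; doubling the refresh rate roughly doubles that requirement, which is why 2560x1440 at 120Hz lands well past the roughly 330MHz that dual-link DVI is nominally rated for.

    #include <stdio.h>

    /* Rough pixel-clock estimate: total (active + blanking) pixels per frame
     * times refresh rate. The blanking figures below are illustrative, in the
     * spirit of reduced blanking, not an exact timing standard. */
    static double pixel_clock_mhz(int h_active, int v_active,
                                  int h_blank, int v_blank, double refresh_hz)
    {
        double h_total = h_active + h_blank;
        double v_total = v_active + v_blank;
        return h_total * v_total * refresh_hz / 1e6;
    }

    int main(void)
    {
        /* 2560x1440 with modest blanking at 60Hz vs an overclocked 120Hz */
        double at60  = pixel_clock_mhz(2560, 1440, 160, 41, 60.0);
        double at120 = pixel_clock_mhz(2560, 1440, 160, 41, 120.0);
        printf("2560x1440 @  60Hz: ~%.0f MHz pixel clock\n", at60);   /* ~242 MHz */
        printf("2560x1440 @ 120Hz: ~%.0f MHz pixel clock\n", at120);  /* ~483 MHz */
        return 0;
    }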

It goes without saying that, just like any other form of overclocking, monitor overclocking can be dangerous and risks breaking the monitor. On that note, out of our monitor collection we were able to get our Samsung 305T up to 75Hz; whether the limit lies with the panel or the driving electronics we can’t say, and the modest bump didn’t seem to have any impact on performance, smoothness, or responsiveness. This is truly a “your mileage may vary” situation.

Comments

  • tipoo - Tuesday, February 19, 2013 - link

    It seems if you were targeting maximum performance, being able to decouple them would make sense, as the GPU would both have higher thermal headroom and run cooler on average with the fan working harder, thus letting it hit higher boost clocks.
  • Ryan Smith - Tuesday, February 19, 2013 - link

    You can always manually adjust the fan curve. NVIDIA is simply moving it with the temperature target by default.
  • Golgatha - Tuesday, February 19, 2013 - link

    WTF nVidia!? Seriously, WTF!?

    $1000 for a video card. Are they out of their GD minds!?
  • imaheadcase - Tuesday, February 19, 2013 - link

    No, read the article you twat.
  • tipoo - Tuesday, February 19, 2013 - link

    If they released a ten thousand dollar card, what difference would it make to you? This isn't exactly their offering for mainstream gamers.
  • jackstar7 - Tuesday, February 19, 2013 - link

    I understand that my setup is a small minority, but I have to agree with the review about the port configuration. Not moving to multi-mDP on a card of this level just seems wasteful. As long as we're stuck with DVI, we're stuck with bandwidth limits that are going to stand in the way of 120Hz for higher resolutions (as seen on the Overlords and Catleap Extremes). Now I have to hope for some AIB to experiment with a $1000 card, or more likely wait for AMD to catch up to this.
  • akg102 - Tuesday, February 19, 2013 - link

    I'm glad Ryan got to experience this Nvidia circle jerk 'first-hand.'
  • Arakageeta - Tuesday, February 19, 2013 - link

    The Tesla- and Quadro-line GPUs have two DMA copy engines. This allows the GPU to simultaneously send and receive data on the full-duplex PCIe bus. However, the GeForce GPUs traditionally have only one DMA copy engine. Does the Titan have one or two copy engines? Since Titan has Tesla-class DP, I thought it might also have two copy engines.

    You can run the "deviceQuery" command that is a part of the CUDA SDK to find out.
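    Alternatively, the same information is exposed programmatically as the asyncEngineCount field of cudaDeviceProp; a minimal sketch of that approach (host-side C against the CUDA runtime API, assuming a CUDA 5.0-era toolkit):

        #include <stdio.h>
        #include <cuda_runtime.h>

        int main(void)
        {
            int count = 0;
            if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
                fprintf(stderr, "No CUDA devices found\n");
                return 1;
            }
            for (int dev = 0; dev < count; ++dev) {
                cudaDeviceProp prop;
                cudaGetDeviceProperties(&prop, dev);
                /* asyncEngineCount: 1 = copies in one direction can overlap kernels,
                 * 2 = copies in both directions can overlap kernels and each other. */
                printf("Device %d (%s): %d copy engine(s)\n", dev, prop.name, prop.asyncEngineCount);
            }
            return 0;
        }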
  • Ryan Smith - Tuesday, February 19, 2013 - link

    1 copy engine. The full output of DeviceQuery is below.

    CUDA Device Query (Runtime API) version (CUDART static linking)

    Detected 1 CUDA Capable device(s)

    Device 0: "GeForce GTX TITAN"
    CUDA Driver Version / Runtime Version 5.0 / 5.0
    CUDA Capability Major/Minor version number: 3.5
    Total amount of global memory: 6144 MBytes (6442123264 bytes)
    (14) Multiprocessors x (192) CUDA Cores/MP: 2688 CUDA Cores
    GPU Clock rate: 876 MHz (0.88 GHz)
    Memory Clock rate: 3004 Mhz
    Memory Bus Width: 384-bit
    L2 Cache Size: 1572864 bytes
    Max Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536,65536), 3D=(4096,4096,4096)
    Max Layered Texture Size (dim) x layers 1D=(16384) x 2048, 2D=(16384,16384) x 2048
    Total amount of constant memory: 65536 bytes
    Total amount of shared memory per block: 49152 bytes
    Total number of registers available per block: 65536
    Warp size: 32
    Maximum number of threads per multiprocessor: 2048
    Maximum number of threads per block: 1024
    Maximum sizes of each dimension of a block: 1024 x 1024 x 64
    Maximum sizes of each dimension of a grid: 2147483647 x 65535 x 65535
    Maximum memory pitch: 2147483647 bytes
    Texture alignment: 512 bytes
    Concurrent copy and kernel execution: Yes with 1 copy engine(s)
    Run time limit on kernels: Yes
    Integrated GPU sharing Host Memory: No
    Support host page-locked memory mapping: Yes
    Alignment requirement for Surfaces: Yes
    Device has ECC support: Disabled
    CUDA Device Driver Mode (TCC or WDDM): WDDM (Windows Display Driver Model)
    Device supports Unified Addressing (UVA): Yes
    Device PCI Bus ID / PCI location ID: 3 / 0
    Compute Mode:
    < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

    deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 5.0, CUDA Runtime Version = 5.0, NumDevs = 1, Device0 = GeForce GTX TITAN
  • tjhb - Tuesday, February 19, 2013 - link

    Thank you!

    It seems to me NVIDIA are being incredibly generous to CUDA programmers with this card. I can hardly believe they've left FP64 capability at the full 1/3. (The ability to switch between 1/24 at a high clock and 1/3 at reduced clock seems ideal.) And we get 14/15 SMXs (a nice round number).

    Do you know whether the TCC driver can be installed for this card?
