A Brief History of Switchable Graphics

So what is Optimus exactly? You could call it NVIDIA Switchable Graphics III, but that makes it sound like only a minor change. After our introduction, in some ways the actual Optimus technology is a bit of a letdown. This is not to say that Optimus is bad—far from it—but the technology only addresses a subset of our wish list. The area Optimus does address—specifically switching times—is dealt with in what can only be described as an ideal fashion. The switch between IGP and discrete graphics is essentially instantaneous and transparent to the end-user, and it's so slick that it becomes virtually a must-have feature for any laptop with a discrete GPU. We'll talk about how Optimus works in a minute, but first let's discuss how we got to our present state of affairs.

It turns out that the Optimus hardware has been ready for a while, but NVIDIA has been working hard on the software side of things. On the hardware front, all of the current G200M and G300M 40nm GPUs have the necessary internal hardware to support Optimus. Earlier laptop designs using those GPUs don't actually support the technology, but the potential was at least there. The software wasn't quite ready, as it appears to be quite complex—NVIDIA says that the GeForce driver base now has more lines of code than Windows NT, for example. NVIDIA was also keen to point out that they have more software engineers (over 1000) than hardware engineers, and some of those software engineers are housed in partner offices (e.g. there are NVIDIA employees working over at Adobe, helping with the Flash 10.1 software). Anyway, the Optimus hardware and software are both ready for public release, and you should be able to find the first Optimus-enabled laptops for sale starting today.


The original switchable graphics designs used a hardware switch to allow users to select either an IGP or discrete GPU. The first such system that we tested was the ASUS N10JC, but the very first implementation came from Sony in the form of the VAIO SZ-110B, launched way back in April 2006. (The Alienware m15x was also gen1 hardware.) Generation one required a system reboot in order to switch between graphics adapters, with hardware multiplexers routing power to the appropriate GPU and more multiplexers routing the video signal from either GPU to the various display outputs. On the surface, the idea is pretty straightforward, but the actual implementation is much more involved. A typical laptop will have three separate video devices: the laptop LCD, a VGA port, and a DVI/HDMI port. Adding the necessary hardware requires six (possibly more) multiplexer ICs at a cost of around $1 each, plus more layers on the motherboard to route all of the signals. In short, it was expensive, and what's worse, the required system reboot was highly disruptive.

Many users of the original switchable graphics laptops seldom switched, opting instead to use either the IGP or GPU all the time. I still liked the hardware and practically begged manufacturers to include switchable graphics in all future laptop designs. My wish wasn't granted, although given the cost it's not difficult to see why. As for the rebooting, my personal take is that it was pretty easy to simply switch your laptop into IGP mode right before spending a day on the road; if you were going to spend most of the day seated at your desk, you'd switch to discrete mode. The problem is that, as one of the more technically inclined folks on the planet, I'm not a good representation of the typical user. Try explaining to a Best Buy shopper exactly what switchable graphics is and how it works, and you're likely to cause more confusion than anything. Our readers grasp the concept, but the added cost for a feature many wouldn't use meant there was limited uptake.


Generation two involved a lot more work on the software side, as the hardware switch became a software-controlled switch. NVIDIA also managed to eliminate the required system reboot (although certain laptop vendors continue to require a reboot or at least a logout, e.g. Apple). Again, that makes it sound relatively simple, but there are many hurdles to overcome. Now the operating system has to be able to manage two different sets of drivers, but Windows Vista in particular doesn't allow multiple display drivers to be active. The solution was to create a "Display Driver Interposer" that had knowledge of both driver sets. Gen2 launched in 2008, and the first laptop we reviewed with gen2 hardware and software was actually the ASUS UL80Vt, which took everything we loved about gen1 and made it a lot more useful. Now there was no need to reboot the system; you could switch between IGP and dGPU in about 5 to 10 seconds, theoretically allowing you the best of both worlds. We really liked the UL80Vt and gave it our Silver Editors' Choice award, but there was still room for improvement.

First, the use of a driver interposer meant that generic Verde drivers would not work with switchable graphics gen2. The interposer conforms to the standard graphics APIs, but there's then a custom API to talk to the IGP drivers. The result is that the display driver package contains both NVIDIA and Intel drivers (assuming it's a laptop with an Intel IGP), so driver updates are far more limited. If either NVIDIA or Intel releases a new driver, an extra ~10 days of validation and testing take place; if all goes well, the new driver is released, but any bugs reset the clock. Of course, that's only in the best-case situation where the NVIDIA and Intel driver releases happen at the same time, which they rarely do. In practice, the only time you're likely to get a new driver is if there's a showstopper bug of some form and the laptop OEM asks NVIDIA for a new driver drop. This pretty much takes you back to the old way of doing mobile graphics drivers, which is not something we're fond of.
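To make the interposer idea more concrete, here's a minimal sketch in Python with entirely hypothetical names (the real interposer is a Windows display driver component, not a script). The point it illustrates is that a single object answers the standard graphics API and forwards each call to whichever vendor driver is currently active—which is why the NVIDIA and Intel drivers end up bundled and validated as a single package.

```python
# Conceptual sketch of a display driver interposer (hypothetical names only).
# One object exposes the "standard" graphics entry points and forwards calls
# to whichever vendor driver is active; changing either backend means the
# combined package has to be revalidated.

class IntelIGPDriver:
    def present(self, frame):
        print("IGP presents frame", frame)

class NvidiaGPUDriver:
    def present(self, frame):
        print("dGPU presents frame", frame)

class DriverInterposer:
    """Conforms to the standard graphics API; forwards calls to the active driver."""
    def __init__(self):
        self.igp = IntelIGPDriver()
        self.dgpu = NvidiaGPUDriver()
        self.active = self.igp  # default to the power-friendly IGP

    def switch_to(self, target):
        # "Custom API" side: choose which vendor driver handles subsequent calls.
        self.active = self.dgpu if target == "dgpu" else self.igp

    def present(self, frame):
        # Standard API side: the OS and applications only ever see this object.
        self.active.present(frame)

if __name__ == "__main__":
    driver = DriverInterposer()
    driver.present(1)           # handled by the IGP
    driver.switch_to("dgpu")
    driver.present(2)           # handled by the discrete GPU
```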

Another problem with gen2 is that there are still instances where switching from IGP to dGPU or vice versa will "block". Blocking occurs when an application that uses the graphics system is resident in memory. If blocking were limited to 3D games it wouldn't be a critical problem, but the fact is blocking can occur with many applications—including Minesweeper and Solitaire, web browsers where you're watching (or have watched) Flash videos, and so on. If a blocking application is active, you need to close it in order to switch. The switch also results in a black screen as the hardware shifts from one graphics device to the other, which looks like potentially flaky hardware if you're not expecting the behavior. (This appears to be why Apple MacBook Pro systems require a reboot/logout to switch, even though they're technically gen2 hardware.) Finally, it's important to note that gen2 costs just as much as gen1 in terms of muxes and board layers, so it can still increase BOM and R&D costs substantially.
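To illustrate why blocking happens, here's a rough sketch of the gen2 switching flow—again in Python, with hypothetical helper names rather than any real driver API. If any process still holds a rendering context on the active GPU, the switch is refused until those applications are closed; only then do the muxes re-route power and the video signal, which is when the screen briefly goes black.

```python
# Rough sketch of the gen2 switching flow (hypothetical helpers, not a real API).
# Any process still holding a context on the active GPU "blocks" the switch, which
# is why even Solitaire or a browser that played a Flash video can get in the way.

def processes_using_gpu(gpu):
    """Stand-in for the driver's bookkeeping of which apps hold a context on `gpu`."""
    return ["solitaire.exe", "firefox.exe"]  # example data

def power_down(gpu):
    print(f"cutting power to the {gpu} via the hardware muxes")

def route_display_to(gpu):
    print(f"muxes now route the video signal from the {gpu}")

def load_driver_state(gpu):
    print(f"activating the {gpu} driver (screen blanks for a few seconds)")

def switch_graphics(current, target):
    blockers = processes_using_gpu(current)
    if blockers:
        # Gen2 behavior: the user has to close these applications before switching.
        raise RuntimeError(f"Close these applications first: {', '.join(blockers)}")
    power_down(current)
    route_display_to(target)
    load_driver_state(target)

if __name__ == "__main__":
    try:
        switch_graphics("dGPU", "IGP")
    except RuntimeError as err:
        print(err)
```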

Incidentally, AMD switchable graphics is essentially equivalent to NVIDIA's generation two implementation. The HP Envy 13 is an example of ATI switchable graphics, with no reboot required and a 5-10 second wait to switch between IGP and discrete graphics (an HD 4330 in this case—and why is it they always seem to use the slowest GPUs; why no HD 4670?).

For technically inclined users, gen2 was a big step forward and the above problems aren't a big deal; for your typical Best Buy shopper, though, it's a different story. NVIDIA showed a quote from Roger Kay, President of Endpoint Technology Associates, that highlights the problem for such users.

"Switchable graphics is a great idea in theory, but in practice people rarely switch. The process is just too cumbersome and confusing. Some buyers wonder why their performance is so poor when they think the discrete GPU is active, but, unknown to them, it isn't."

The research from NVIDIA indicates that only 1% of users ever switched between IGP and dGPU, which frankly seems far too low. Personally, if a laptop is plugged in, I see no real reason to switch off the discrete graphics, and if you're running on battery power there's little reason to enable the discrete graphics most of the time. It could be that only 1% of users actually recognize that there's a switch taking place when they unplug their laptop; it could also be that MacBook Pro users with switchable graphics represented a large percentage of the surveyed users. For my part, I'm quite happy with gen2, and my only complaint is that not enough companies use the technology in their laptops.

Comments

  • jfmeister - Tuesday, February 9, 2010 - link

    I was anxious to get an mx11 but 2 things were bothering me:
    1- No DirectX 11 compatibility
    2- No Core i5/i7 platform.

    Now there is another reason to wait for the refresh. But with Arrandale prices dropping, DX11 cards available, and Optimus, I would expect Alienware to get on the bandwagon fast with a new mx11 platform and not wait 6 to 8 months for a refresh. This ultraportable laptop is intended for gamers, and we all know that gamers are on top of their things. Optimus in the mx11's case should be a must.

    BTW, what I find funny is Optimus looks like a revolution, but what about 3dfx 10 years ago with their 3D card add-on (Monster 3D 8MB ftw)? Switching was used back then... This looks like the same thing except with HD video support! It took that long to come up with that?
  • JarredWalton - Tuesday, February 9, 2010 - link

    Remember that the switching back in the days of 3dfx was just in software and that the 3D GPU was always powered. There was the dongle cable situation as well. So the big deal here isn't just switching to a different GPU, but doing it on-the-fly and powering the GPU on/off virtually instantly. We think this will eventually make its way into desktops, but obviously it's a lot more important for laptops.
  • StriderGT - Tuesday, February 9, 2010 - link

    My take on Optimus:

    Optimus roots lie with hybrid SLI.
    Back then it was advertised as an nvidia only chipset feature (nvidia IGP + nvidia GPU) for both desktop and notebooks.

    Currently nvidia is being rapidly phased out of PC x86 chipsets so optimus is the only way to at least put an nvidia GPU on an intel IGP based system, but:

    1. Only real benefit is gaming performance without sacrificing autonomy in notebooks.
    2. Higher cost (in the form of the discrete GPU); intel has 60%+ of GPUs (= IGPs) because the vast majority do not care, or are uninformed, about game performance scaling.
    3. CUDA/Physx currently and in the foreseeable future irrelevant for mobile applications (gaming is much more relevant in comparison).
    4. Video decoding capabilities are already present in most current IGPs (except Pine Trail netbooks, which can acquire it with a cheaper dedicated chip).
    5. Netbooks will not benefit from Optimus because they lack the CPU horsepower to feed the discrete GPU and are very cost sensitive... (same reason that ION1/2 is not the primary choice for netbook builders)
    6. In the desktop space, only some niche small form factor PC applications could benefit from such a technology, e.g. an SFF PC would need less cooling/noise during normal (IGP) operation and become louder/more powerful while gaming (GPU).
    7. Idling/2D power consumption of most modern desktop GPUs is so low that the added complexity of a simultaneously working onboard IGP and the associated software brings no benefit.
    8. Driver/application software problems that might arise from the complexity of profiles and the vastly different workload application scenarios.

    So in the end it boils down to how nvidia can convince the world that a discrete GPU and its added cost are necessary in every portable (netbook-sized and upwards) device out there. As for the desktop side, it will be even more difficult to push such a thing, with only noise reduction in small form factor PCs being of interest.

    BTW, at least now the manufacturers won't have any more excuses for the lack of a decent GPU inside some of the cheaper notebook models ($500-1000) on battery autonomy grounds.
    Oh well, I'll keep my hopes low after so much time being a niche market, since they might find some other excuse along the lines of the weight and space required for cooling the GPU during A/C operation... :-(

    PS Initially posted on yahoo finance forum
  • Zoomer - Tuesday, February 9, 2010 - link

    Not like it was really necessary; the Voodoo 2 used maybe 25W (probably less) and was meant for desktop use.
  • jfmeister - Tuesday, February 9, 2010 - link

    Good point! I guess I did not take the time to think about it. I was more into the concept than the whole technical side of it that you brought up.

    Thanks!

    JF
  • cknobman - Tuesday, February 9, 2010 - link

    Man, the mx11 was the biggest disappointment out there. Weak sauce last-gen processor on a so-called premium high-end gaming brand? I'll consider it once they get an Arrandale CULV and Optimus, because right now, looking at the notebookreview.com forums, it uses manual switching graphics, not Optimus.
  • crimson117 - Tuesday, February 9, 2010 - link

    Which processor should they have used, in your opinion?
  • cknobman - Tuesday, February 9, 2010 - link

    They should have waited another month to market and used the Core i7 ULV processors. There are already a few vendors using this proc (Panasonic is one).
  • Wolfpup - Tuesday, April 20, 2010 - link

    Optimus is impressive software, but personally I don't want it, ever. I don't want Intel graphics on my CPU. I don't want Intel graphics in my memory controller. I don't want Intel graphics. I want my real GPU to be my real GPU, not a helper device that renders something that gets copied over to Intel's graphics.

    I just do not want this. I don't like having to rely on profiles either; thankfully you can manually add programs, but still.
