This afternoon NVIDIA announced their plans for a public “GeForce Gaming Celebration” later this month, taking place amidst the Gamescom expo in Cologne, Germany. Promising talk of games and “spectacular surprises,” this marks the first real GeForce-branded event that NVIDIA has held this year, and comes just over two years after the company last held such a large event opposite a major gaming expo.

The world’s biggest gaming expo, Gamescom 2018, runs August 21-25 in Cologne, Germany. And GeForce will loom large there -- at our Gamescom booth, Hall 10.1, Booth E-072; at our partners’ booths, powering the latest PC games; and at our own off-site GeForce Gaming Celebration that starts the day before [August 20th].

The event will be loaded with new, exclusive, hands-on demos of the hottest upcoming games, stage presentations from the world’s biggest game developers, and some spectacular surprises.

The timing of the event, along with the vague description of what it’s about, is sure to drive speculation about what exactly NVIDIA will have to show off, especially as we're approaching the end of NVIDIA's usual 24-30 month consumer product cycle. Their 2016 event opposite Dreamhack was of course the reveal of the GeForce 10 Series. And the date of this year’s event – August 20th – happens to be the same day as the now redacted/canceled NVIDIA Hot Chips presentation about “NVIDIA’s Next Generation Mainstream GPU.”

For what it’s worth, NVIDIA’s 2016 teaser didn’t say anything about the event in advance – it was merely an invitation to an unnamed event – whereas this teaser specifically mentions games and surprises. So make of that what you will.

Meanwhile, as noted previously, this is a public event, and NVIDIA says there is a limited amount of space for Gamescom attendees and other locals to register and catch it in person. Otherwise, like most other NVIDIA events, it will be live streamed for the rest of the world, kicking off at 6pm CET.

Source: NVIDIA

Comments

  • MajGenRelativity - Tuesday, July 31, 2018

    Yes, AMD and Intel have kept TDPs down, but the performance increase has not been the same as with GPUs. As for TDP growth, it's held steady for the past several years. Let's take the GTX 580 vs the GTX 1080 as an example. The GTX 580 cost slightly less, but had a higher TDP (244W vs 180W for the 1080). Now compare the performance of the 580 vs the 1080. Then take the i7-2600K vs the i7-8700K: both have the same TDP, and the 2600K cost $30 less. Compare the performance of the 8700K vs the 2600K. Does the 8700K deliver the same performance boost over the 2600K as the 1080 does over the 580? And the 1080 did all this while REDUCING the TDP.
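
    To put rough numbers on that, here's a minimal back-of-the-envelope sketch. The TDP figures are the rated ones cited above, while the speedup multipliers are illustrative placeholders rather than benchmark results, so substitute your own numbers:

    ```python
    # Back-of-the-envelope perf-per-watt comparison.
    # TDPs are the rated figures cited above; the speedup multipliers are
    # illustrative placeholders only, not measured benchmark results.

    def perf_per_watt(relative_perf: float, tdp_watts: float) -> float:
        """Performance delivered per watt of rated TDP."""
        return relative_perf / tdp_watts

    # GPU side: GTX 580 (244 W) vs GTX 1080 (180 W).
    gpu_speedup = 3.0  # hypothetical 1080-over-580 multiplier
    gpu_gain = perf_per_watt(gpu_speedup, 180) / perf_per_watt(1.0, 244)

    # CPU side: i7-2600K vs i7-8700K, both carrying the same 95 W rated TDP.
    cpu_speedup = 1.5  # hypothetical 8700K-over-2600K multiplier
    cpu_gain = perf_per_watt(cpu_speedup, 95) / perf_per_watt(1.0, 95)

    print(f"GPU perf/W gain: {gpu_gain:.2f}x, CPU perf/W gain: {cpu_gain:.2f}x")
    ```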
  • PeachNCream - Tuesday, July 31, 2018

    Those examples are limited to a relatively narrow span of years. I agree that if you limit your time horizon to 2010 through the present day, you'd see a somewhat more favorable set of circumstances, but by 2010, dGPU manufacturers were already firmly entrenched in solving the technical hurdles of attaining greater performance by throwing more electrical power and larger heatsinks at the problem. The switch off 28nm helped with the current generation, so that one-off gain, though it was largely squandered on increasing clocks rather than improving efficiency, helps to paint a slightly less painful picture. Unfortunately, the switch to 14/16nm was ultimately wasted on ramping up speed while largely just holding the line on already fat TDP numbers. There's no reason why, in NVIDIA's current product stack, the only GPU that doesn't require direct power from the PSU should be the lowly 1030. Don't get caught in the trap of being satisfied with 75+ watts just because it's been that way for a few years. It seems to be a common stumbling block that the average enthusiast's "draw distance" is limited to a handful of years when glancing back at the past.
  • MajGenRelativity - Tuesday, July 31, 2018

    I feel like you don't understand how efficiency works. Efficiency can be loosely defined as the amount of "work" (work in the layperson's sense of the word, i.e. "stuff done," not the technical definition) per unit of energy expended, e.g. per joule. Increasing the clockspeeds, and the "stuff done," for the same TDP is an increase in efficiency. 14/16nm did in fact increase efficiency. You're confusing increasing efficiency with lowering TDP; they're not the same thing. If I made a product that performed 1/10 as well as a 1080 while having 1/9 the TDP, you might be happy, but it would actually have a lower efficiency (a quick worked check is at the end of this comment).

    As for going only back to 2010, we can cast our eyes all the way back to the 8800 Ultra in 2007, which had a 171W TDP. Yes, we can go further back, but there really isn't a point. The electronics industry has changed tremendously since before then, and GPUs have changed a lot as well.

    Also, the GTX 1050 and 1050 Ti have a 75W TDP, and can come without power connectors.
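
    And as a quick check of the efficiency example above, here's a minimal sketch; the performance and TDP ratios are the hypothetical ones from my example, not real products:

    ```python
    # Efficiency ~ "work" done per joule. With a fixed workload and TDP as a rough
    # proxy for power draw, relative efficiency = relative performance / relative power.

    def relative_efficiency(perf_ratio: float, power_ratio: float) -> float:
        return perf_ratio / power_ratio

    # The hypothetical product above: 1/10 the performance of a GTX 1080 at 1/9 its TDP.
    eff = relative_efficiency(1 / 10, 1 / 9)
    print(f"Relative efficiency vs. the 1080: {eff:.2f}x")  # 0.90x, i.e. ~10% less efficient
    ```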
  • PeachNCream - Tuesday, July 31, 2018

    That's just the thing: more "work" isn't being accomplished, regardless of the improvements. Taken from the perspective of the home graphics card's purpose, entertaining the person at the keyboard, there's no more or less entertainment happening just because a modern graphics card requires more power. That's been the case, I'd argue, since DOS 6.22 was the primary PC operating system. From an abstract perspective, a game back then was just as much of an amusing time sink as a game now. TES: Arena could eat someone's free time and offer a compelling, amusing thing to do just as well as Fallout 4 can now. The wrinkle in that thinking comes with the fact that a 486-class desktop ran full-tilt on a roughly 100W PSU (I owned a Packard Bell packing a 60W PSU), while that same output in a modern PC is pressed rather hard to supply a similarly smooth, seamless gaming experience. Therefore, while there are more transistors flipping on and off more quickly now, at least where gaming and gaming graphics are concerned, there's no additional work accomplished for all the effort, and the whole thing makes about as much sense as a bunny with a pancake on its head.
  • MajGenRelativity - Tuesday, July 31, 2018

    So, more transistors flipping more quickly are magically supposed to consume less power? You're not defining work correctly at all. More calculations are happening at a greater speed, so more work is being done.
  • PeachNCream - Tuesday, July 31, 2018

    The end result of those transistors flipping on and off is entertainment. That part isn't changing since the human looking at the screen or wearing the VR glasses is still getting the same benefit for the increasing input costs of power and money.
  • MajGenRelativity - Tuesday, July 31, 2018

    You're saying that a game today is exactly the same level of complexity as in the 1980-1990 era you keep comparing it to? There has been absolutely ZERO increase in performance and/or realism? You played 4K/60Hz games in the 1990s?
  • PeachNCream - Tuesday, July 31, 2018

    I'm giving up after this. The end user gets x number of hours of entertainment regardless of the lighting effects, resolution, presence or absence of MSAA, or anything else the additional compute power offers. The end user got those hours of entertainment in 1993 and the end user gets them now in 2018. While there is certainly a difference in how things look, the outcome is identical. End user = amused.
  • MajGenRelativity - Tuesday, July 31, 2018

    If we're just looking for x number of hours of entertainment, why use computers at all? Read a book, play cards, do whatever. Anyways, I enjoyed the debate. Have a pleasant day.
  • matthewsage - Tuesday, July 31, 2018

    No, it's not just the "lowly 1030" that doesn't require direct power from the PSU. There are several GTX 1050 and GTX 1050 Ti graphics cards that are content with drawing power from the PCI-E slot. Both are pretty capable cards.
