This afternoon NVIDIA announced their plans for a public “GeForce Gaming Celebration” later this month, taking place amidst the Gamescom expo in Cologne, Germany. Promising talk of games and “spectacular surprises,” this marks the first real GeForce-branded event that NVIDIA has held this year, and it comes just over two years after the company last held such a large event opposite a major gaming expo.

The world’s biggest gaming expo, Gamescom 2018, runs August 21-25 in Cologne, Germany. And GeForce will loom large there -- at our Gamescom booth, Hall 10.1, Booth E-072; at our partners’ booths, powering the latest PC games; and at our own off-site GeForce Gaming Celebration that starts the day before [August 20th].

The event will be loaded with new, exclusive, hands-on demos of the hottest upcoming games, stage presentations from the world’s biggest game developers, and some spectacular surprises.

The timing of the event, along with the vague description of what it’s about, is sure to drive speculation about what exactly NVIDIA will have to show off, especially as we're approaching the end of NVIDIA's usual 24-30 month consumer product cycle. Their 2016 event opposite DreamHack was of course the reveal of the GeForce 10 series. And the date of this year’s event – August 20th – happens to be the same day as the since-canceled NVIDIA Hot Chips presentation about “NVIDIA’s Next Generation Mainstream GPU.”

For what it’s worth, NVIDIA’s 2016 teaser didn’t say anything about the event in advance – it was merely an invitation to an unnamed event – whereas this teaser specifically mentions games and surprises. So make of that what you will.

Meanwhile, as noted previously, this is a public event, and NVIDIA says there is a limited amount of space for Gamescom attendees and other locals to register and catch it in person. Otherwise, like most other NVIDIA events, it will be live streamed for the rest of the world, kicking off at 6pm CET.

Source: NVIDIA

41 Comments

  • MajGenRelativity - Tuesday, July 31, 2018 - link

    Alright. Then we should leave the high ends of both markets completely out of the discussion, and stick more to 1050/1060-level GPUs and i3/i5-level CPUs. In that case, the TDPs are far more comparable: 95W for the highest-powered i5 vs. 120W for the 1060, and 65W for an i3 vs. 75W for the 1050 Ti.
  • philehidiot - Tuesday, July 31, 2018 - link

    Yeah, I remember the days of single-slot cards. Then double-slot coolers came along and started blocking your PCI slots. Once mobo manufacturers started shifting things around, it became the accepted norm, and we now even see the occasional triple-slot card.

    I think the consumer is partly to blame in their ready acceptance. The whole "wow, my card is so powerful it needs a double-sized cooler and two extra power cables" is almost a boast for many people, as gaming hardware seems to edge towards implying excess (just look at the shape and size of "gaming" cases with their windows to show off your amazing, glow-in-the-dark hardware). It's like a car having a giant spoiler and flared wheel arches. You might also stick a giant engine in there knowing full well it's overpowered, just for kicks.
  • PeachNCream - Tuesday, July 31, 2018 - link

    I think you're right in that we consumers ought to be shouldering much of the blame for the state the PC enthusiast market is in today. What's produced is what sells, and one article or another here on AnandTech pointed out that products with RGB offer better sales and higher margins, so we shouldn't be surprised. I've actually heard people brag about the number of power connectors and the size of the cooler on their graphics card, so you're on point about it.

    I guess at this point though, unless NVIDIA's 11-series is going to move the TDP numbers somewhat lower, I just can't find a reason to feel excited about it. I'm expecting another ho-hum product cycle where the only two horses in the race offer an assortment of cards that spatter performance charts with a few improvements, at the cost of too much power and too much space, and at a price I'm not interested in paying.
  • tarqsharq - Tuesday, July 31, 2018 - link

    Are you faulting AMD and Nvidia for "brute forcing" a rendering task that can be run in parallel at an enormous scale in order to give you fast frame rates?

    The only reason CPUs haven't followed suit is that game engines, and most of the program tasks that end users run, are largely single-thread limited.

    If games were easy to program to run on 16 threads, and faster because of it, we would see massive CPUs with massive TDPs being highly sought after by the enthusiast gaming community, instead of games being happily run on 4-core, SMT-enabled CPUs.
  • PeachNCream - Tuesday, July 31, 2018 - link

    We do see many-threaded games and applications, but not in the PC space yet. That's happening already in the mobile sector, though. The paradigms are different, as are the platform constraints, but the programmatic problems are similar enough, and lots of common programs, including games, are taking advantage of lots of threads. The same holds true for consoles, and they are the source of a lot of PC games through porting, so I don't think it's a programming problem. In fact, I'm not really sure why we haven't seen greater parallelism at the CPU level in the home computing space. If I were to hazard a guess, I'd say it's because of the effectively "infinite" electrical energy available: we can afford to ramp single-threaded performance despite the inefficiency because we don't have to worry as much about cooling, and battery life isn't a concern at all.
  • MajGenRelativity - Tuesday, July 31, 2018 - link

    Have you ever tried to program a multi-threaded program? I've read in multiple articles that multi-threading is an order of magnitude more difficult than single-threading. To be upfront and honest, I have had very limited experience with multi-threading, but I've seen plenty of articles on how difficult it is to do stably and performantly.
  • PeachNCream - Tuesday, July 31, 2018 - link

    I have, and it's not. We have tools now that make it nearly or completely trivial, depending on the language you're using to write code. A quick sketch of what I mean is below.
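    For instance, here is a minimal sketch of fanning CPU-bound work out across threads with nothing but the C++ standard library. heavy_work() and the chunk sizes are hypothetical stand-ins for a real workload; the point is only how little threading boilerplate the embarrassingly parallel case needs.

        // Fan-out/fan-in with std::async: one future per chunk of work.
        #include <future>
        #include <iostream>
        #include <vector>

        // Hypothetical stand-in for a CPU-bound task.
        static long heavy_work(long n) {
            long sum = 0;
            for (long i = 0; i < n; ++i)
                sum += i % 7;
            return sum;
        }

        int main() {
            std::vector<long> chunks(8, 5000000);   // eight equally sized chunks

            // Fan out: the library, not the programmer, manages the threads.
            std::vector<std::future<long>> results;
            for (long c : chunks)
                results.push_back(std::async(std::launch::async, heavy_work, c));

            // Fan in: wait for each result and accumulate.
            long total = 0;
            for (auto& f : results)
                total += f.get();

            std::cout << "total = " << total << '\n';
        }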
  • MajGenRelativity - Tuesday, July 31, 2018 - link

    Ok
  • CiccioB - Tuesday, July 31, 2018 - link

    You're joking.
    Multi-threading is something that is completely outside the optimization capabilities of any tool, in whatever language.
    It requires the human brain to think in multitasking mode (which it is not suited for) and to overcome all the overheads that arise along the way.
    Thinking up and creating a multi-threaded algorithm is not the same as simply checking a checkbox somewhere in the compiler's options or in the framework (see the sketch at the end of this comment).
    You have been talking all this time about things that it is clear you do not have a clue about.
    Keep on playing with the Commodore 64, which uses very little power and is way fast enough for solving your childish thoughts and problems.
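    To make that concrete, here is a minimal sketch of the kind of shared-state hazard that no compiler checkbox removes: two threads incrementing an unsynchronized counter. The counter, loop count, and thread count are purely illustrative.

        // Two threads bump a plain int with no synchronization. This is a data
        // race (undefined behaviour); the printed total is usually well short
        // of 2,000,000. Choosing the fix -- std::atomic<int>, a mutex, or
        // restructuring the work -- is design effort the programmer has to do.
        #include <iostream>
        #include <thread>

        int main() {
            int counter = 0;    // shared, not atomic, not protected by a lock

            auto bump = [&counter] {
                for (int i = 0; i < 1000000; ++i)
                    ++counter;  // read-modify-write, interleaves across threads
            };

            std::thread t1(bump);
            std::thread t2(bump);
            t1.join();
            t2.join();

            std::cout << "counter = " << counter << '\n';
        }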
  • CaedenV - Tuesday, July 31, 2018 - link

    I am not really expecting 11-series chips to come out yet, but if they do, I am really, REALLY hoping that they are a big step forward on the gaming front, and a huge step backwards for crypto mining.
    I would love to be able to sell my GTX 1080 for nearly full price to a crypto miner and upgrade to something faster that can actually play 4K games without issue. The 1080 is almost there... but frames often dip below 30fps, which becomes problematic at times. A good 20% boost at 4K would be a very welcome improvement.
