Why can't they post a review yet? The exact moment the NDA lifted (9 AM Eastern) was when this preview was published. Other websites posted reviews at that time.
I'm with the OP on this one. You guys talk about other sites not having detailed information, but I refer you to PCPer's review. They somehow were able to perform full Frame Rating analysis on their full test suite, address the 2/3-way SLI concerns, and discuss the architecture in detail. So, holding AnandTech's feet to the fire on this one is justified in my opinion.
The thing is, some of those other reviewers went to the launch event and got their cards immediately. It doesn't seem anybody from AnandTech went, so they probably had to wait for a review sample.
NVIDIA seemed to invite everyone and their mother to the unveil. There were YouTubers there with under 500K subscribers. I'd say if AnandTech didn't send anyone to the event... then they missed possibly the biggest tech reveal of 2016 so far.
From what I've heard, NV only provided functional drivers Wednesday of last week. So all those sites had less than a week to review a major release. Some sites bit the bullet and made it happen anyway, but others had other obligations making that impossible.
I honestly don't understand why people gripe about paper launches. Hopefully the people who gripe about paper launches are not the same ones who are checking WCCFTech every day for new "leaks" about tech products. A paper launch means giving people official specs about a card and allowing informed buying decisions even before the cards come out. What does it matter if they paper launch the card in May and physically launch it in June, if there is no way they could physically launch until June anyway?
Anandtech's preview has nearly as much information as plenty of other sites' full reviews, and their full review will put the others to shame. Patience is a virtue!
30% more performance than the 980 Ti/Fury X is decent, although in the (one?) DX12 benchmark I see, the fact that the 1080 is only 10% ahead of the Fury X bodes pretty well for AMD, I think. I'd really love AMD to push out a GPU somewhat capable of 4k gaming, even if the settings aren't necessarily maxed, and then pair it with that new MG24UQ. I'm pretty interested in the 1070 for now, since that's what AMD will compete with.
Possible typo, I think - "roughly speaking the GTX 1080 should be 3x faster than the GTX 980 or Radeon HD 7970," I think you meant to say the GTX 680.
They won't. The Fiji GPU + HBM + interposer is too expensive to sell at fire sale prices. I could see them getting it down to $400, but that would still be more than 1070, and slower as well, so pretty pointless.
There are rumours that AMD might come out with Vega this October. Then again, the GP100 chip can be released to the consumer space early too if NVIDIA feels the need to respond. All in all, I'm hoping for pretty darn amazing high-end MXM cards from both sides. This generation is rather exciting!
There are always rumors of AMD releasing something. There were rumors that the Fury cards would be released up to 9 months before they were eventually released. For the past few years AMD's release schedules go backwards, not forwards. I'll believe a 2016 Vega release when I see it. Did AMD even say when in 2017 Vega is supposed to arrive? They have it sitting there at the very beginning of 2017 in their slide graphic, but that's hardly something they'd have any trouble ignoring (in terms of PR) if it actually comes out in Q2 2017.
Well, they have a lot of wiggle room in this. If they release it early, it means lower yields, which means lower initial margins, and possibly lower out of the box clock speeds, but they do get the performance crown (at least before the 1080Ti is released). They can however release it early, then release a 'GHz' edition after Ti is out to compete with it as needed.
Well, there have been rumors, and support for the Vega 10 chip was added to AIDA64 two weeks ago. So maybe the October rumors aren't so far off.
I'm also more interested in where the 1070 falls into place at this point. nVidia is milking the early adopter tax a little too hard for my tastes with the 1080. Hopefully, the 1070's positioning will negate this to some extent.
We're also looking at stock speeds. By all accounts, the custom 1080s will probably be able to push at least another 20% performance with overclocking. Hopefully much more with additional power connectors (I'm hoping for a 2.5 GHz overclock)
By virtue of having a much better cooler, the 3rd party models should clock higher. 80°C seems so high. (My OCed 770 at 1.25GHz hits 62°C tops with a PNY XLR8 three-fan cooler.)
Not likely. If you look at old Intel processors on a bigger production node versus new Intel processors on a smaller production node, the overclocking potential has not improved. There are benefits to moving to a smaller production node, but more overclocking headroom is not one of them!
@zoxo: "Don't forget that you should compare the GP104 chip to the GM104, so vanilla 980, as the 1080 Ti will come down the line"
Architecturally, yes. That is the comparison to make. However, from a consumer standpoint, the 1080 is positioned as the new halo product, and it comes in closer to the price of the older halo product (980 Ti). Thus, until such a time as a new halo product (1080 Ti? / Titan?) emerges, it will be compared to the 980 Ti. Don't forget, the 980 was compared with the 780 Ti, and there was a significant time gap before the 980 Ti hit the scene. I doubt a 1080 Ti will be quick in coming.
"After being on 28nm since 2011, PC hardware is finally starting to get interesting again. Pascal, Polaris, Zen, exciting times. "
Couldn't agree more. Even though this card makes my 970 look pathetic, it also makes me very happy to see these performance gains in an age where it looked like the entire PC market had stagnated. If you ignore the price premium, this kind of leap over the 980 (which is the card that the GTX 1080 technically replaces) reminds me of previous grand GPU launches like the Radeon 9700, GeForce 4, GeForce 8800, etc.
Even the $380 1070 should be a decent leap following this, as it still has Titan X-beating performance. I hope both camps drive forward performance at the $200 price point.
So the recent $500 sale price for the GTX 980 Ti is about where it should be, considering the GTX 1080 is roughly an equal percentage higher in both performance and cost. That's personally not what I'd call a good value, but when you have a monopoly at the high end of the graphics card spectrum, that's about what I'd expect in terms of pricing.
The benchmark I was looking at (Doom at 1440p @ Guru3D) had the 1080 21.7% faster than the 980 Ti. Newer prices of the 980 Ti ($500) vs. the 1080 ($600) are 20% higher. Seems on point.
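(Editor's note: a quick sanity check of the arithmetic above, as a minimal C++ sketch. The prices and the 21.7% Doom figure are the ones quoted in the comment, not independent measurements.)

```cpp
// Is the 1080's price premium proportional to its performance lead?
// All figures are taken from the comment above.
#include <cstdio>

int main() {
    const double perf_ratio  = 1.217;  // 1080 vs. 980 Ti, Doom @ 1440p
    const double price_1080  = 600.0;  // USD, street
    const double price_980ti = 500.0;  // USD, sale price
    const double price_ratio = price_1080 / price_980ti;

    printf("price premium: %.1f%%\n", (price_ratio - 1.0) * 100.0);          // 20.0%
    printf("perf per dollar vs 980 Ti: %.2fx\n", perf_ratio / price_ratio);  // ~1.01x
    return 0;
}
```

At roughly 1.01x, the two cards come out essentially even on performance per dollar, which is what "seems on point" amounts to.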
Which is likely why we see a marked improvement in some cards' performance overall. I'd have expected some of those to be further behind than they actually are.
This card exists in the zone of being overpowered at 1080p and 1440p and still a hair too slow at 4K in some titles. I guess we will have to wait until the big-die 14/16nm GPUs come out before we get no-compromise 4K.
If you have a 980 Ti, overclock it (it probably already is) and it goes toe to toe with the stock 1080; no real need to upgrade.
Looking at the reviews right now, I doubt the 1070 will even touch the 980 Ti, and if they keep the EU pricing up at 460-520 EUR, it's highly overpriced for what it delivers. So I'm just gonna wait for what AMD brings to the table.
Or wait for 3rd party models. Techspot showed pretty good gains from OCing, so the big coolers (or liquid cooling) that can hit 2+ GHz are going to be where the 1080 shines.
If you're not trying to drive 4k resolutions, there probably isn't a compelling reason to upgrade from a 670 until the generation after the 10x0. I do agree that the 1080 strikes me as "yet another GPU refresh" because the performance increase isn't significant and the power/thermal numbers are only holding steady despite the more efficient FinFET process.
I'm still interested in reading the full review, but at this point Ryan's comment about AMD's possible future plans - "Rather the company is looking to make a run at the much larger mainstream market for desktops and laptops with their Polaris architecture, something that GP104 isn’t meant to address." is far more interesting to me since I'd like to see the new process node put to use in laptops and in lower end portions of the GPU market.
The GP106 (presumably 1060/1050) is due out in the fall.
AMD will initially have the middle of the market to itself. It'll be interesting to see how well they're able to exploit it though. Not having a true flagship until Vega launches will hurt them among the large body of ignorant consumers who look at the headline numbers for top of the line cards because they're the most visible and buy based on that; a problem that's been dogging them for the last few years as nVidia has grown its market share.
The biggest question is whether the lack of a flagship at launch is due to the unavailability of HBM (i.e. Vega doesn't have GDDR5/X memory controllers at all), a deliberate decision to go for the center of the market first, or an indicator that GloFo is struggling on 14nm yields. The latter is alarming if true, since it would mean that despite probably being able to crush nVidia in the mid-range for the next few months, limited availability would prevent them from being able to exploit their lead effectively at a time when AMD desperately needs a cash-cow-generating win somewhere.
Yes, there's no question that there'll be some lost sales over the impression of market leadership, which spills over into unrelated segments where the competition for the fastest high-end GPU isn't really relevant. Buyers who don't look at the value proposition of the specific product they're purchasing relative to its price bracket are pretty commonplace when it comes to computer components. I think it might hurt AMD's bottom line, but maybe not if the mid-range volume is high enough to offset those lost sales.
With respect to 14nm yields, I'd think that positioning the company to tackle the middle of the GPU price/performance market would be exceptionally unwise if there were problems with yield, so I don't think it's worth worrying much about. Lower end GPUs use less wafer and that might offer an advantage, but lower priced cards typically sell in larger numbers than top end cards, so demand will be higher and expectations for fab output might be higher as well. I'd like to think the decision is deliberate. AMD has also exhibited a history of targeting unfilled or less well served segments in order to find a niche that generates sales when they aren't in a position to lead in performance, as demonstrated by their dropping out of the high end CPU market. That might be a bad strategy though, since it hasn't done them any favors at retaining CPU market share, and it does look like they're following a similar course with graphics.
I'm not sure what to think really, but I will be keeping an eye out for AMD's upcoming graphics products as they're released since they may offer more value for the dollar. I don't really need a lot of GPU since I keep resolutions low and use Steam's streaming exclusively now, but I would like to upgrade out of the GT 730 with a 16/14nm card that offers a little more of everything, but stays in a reasonable power budget.
The problem with low end AMD cards atm is they lack features. Give us a $150-200 card with 4K 10-bit hardware HEVC (H.265) decode, HDMI 2.0, DP 1.4, etc. and moderate gaming performance, and it will sell. Give us great performance/cost and shitty features? Watch it sit on the shelf.
The 730 was a cheap upgrade I did last year to get a hotter-running and far older GeForce 8800 GTS out of my system, to take some load off the power supply (only 375 watts) so I could upgrade the CPU from a tired Xeon 3065 to a Q6600 without pushing too hard on the PSU. The only feature I really did bother with making sure I got was GDDR5, so the chip wasn't hamstrung by 64-bit DDR3's bandwidth issues. The A10's iGPU would indeed make it look underpowered, but I'm not in the market for integrated graphics for my desktop.

However, it's long overdue for a rebuild, for which I'm gathering parts now. I would have considered an A10, but instead I just picked up an Athlon x4 and will carry the 730 forward onto the new motherboard for a little while, until 16/14nm makes its way down the product stack into lower end cards. Since I plan to eventually purchase whatever new generation hardware is out on the smaller process node anyway, a CPU with an iGPU that ultimately ends up being unused doesn't make a lot of sense. In the short term the 730 should be fine for anything I do anyway, since I have no reason to push higher resolutions or use any sort of spatial anti-aliasing. All of that doesn't really matter once the game's video and audio are rolled up in an h.264 stream and pushed across my network from my gaming box to my netbook, where I ultimately end up playing any games on a low resolution screen anyway. I think something around a GTX 950's performance would be perfectly fine for anything I need to do, so I'm content to wait until I can get that performance for around $100 or less.

Spending my fun money on a computer is a very low priority, and I can always wait until later to get newer/faster hardware if a game I'm interested in playing doesn't run on my current PC. Such is the case with Fallout 4, but I won't bother with it until all of its DRM is out, there are patches that address most of its issues, and it's got a GOTY edition on discount through Steam for $20. By then, whatever I'm running will be more than fast enough to offer an enjoyable gaming experience without me struggling and grubbing around to find high end gear for it or diverting money from seeing films, traveling, or dining out.

I also don't have to bother overclocking, buying aftermarket cooling solutions, managing cables to optimize airflow, or any of that other garbage I used to deal with years ago... I don't know how many hours I spent playing IDE cable origami so those big ribbons wouldn't impede a case fan's air current over a heatsink, just to eke out one or two meaninglessly fewer degrees C on an unimportant component. Now, screw it, I put crap together once and forget about it for a few years, enjoying the fun it provides along the way, because I finally figured out that the parts are just a means to obtain a few hours a week of recreation and not the ends themselves.
I understand that the idea of someone playing casual games while also keeping tabs on computer hardware is somehow a really threatening concept, but don't let it cloud your thoughts too much that you assume that Angry Birds and Fallout 4 are mutually exclusive. You can be smarter and better than that if you try.
They won't have the middle of the market completely to themselves. They'll have the only new cards in the segment for 2 or 3 months. But during that time those cards will be competing with the 980 and 970. AMD, on the other hand, probably can't make much money selling Fury cards priced to compete with the 1070, and they'll have virtually nothing competing with the 1080, and that situation will last for 6 or more months. That's the reason AMD will be hurt, not because of "ignorant customers", as you claim.
As an aside, if consumers were ignorant to choose new Maxwell cards over older AMD cards competing against them, why will they not similarly be ignorant to choose new Polaris cards over the older Maxwell cards competing with them?
I fail to see how choosing old tech over new tech for a price difference of a few euros is the smart thing to do. Everyone wants new tech; it's a psychological and practical factor.
As an example, where I'm living, in winter we can have -10 or -20°C, but in summer it's not uncommon to exceed 40°C. For me power consumption is a factor. Less heat, less noise. The GTX line is well worth the money.
Isn't it over 4 times faster than a 670? If the 670 still works for you, will something being 10 times faster make a difference? Are you looking to jump up from 1080p with low quality settings to 4K with high quality settings?
Funny, I'm thinking of replacing my 770, but with a 1070, because of the lower TDP, faster memory (and more of it), and more eye candy. Granted, I may find I need to upgrade the CPU too, but that's life, and it's approaching five years old anyway...
Wait for the GTX 1070 review to decide. I would estimate that if you have the patience to wait for after-market $380-400 GTX 1070 cards, they will be a better buy than a used 980 Ti.
"Translating this into numbers, at 4K we’re looking at 30% performance gain versus the GTX 980 and a 70% performance gain over the GTX 980, so we’re looking at "
What about including the HTC Vive in your benchmarks? If you talk about the VR benefits, you have to show them in graphs; it's your specialty, AnandTech! ;)
Seconded. At this point VR gaming is much more interesting to me than even 4K gaming, and will drive my video card upgrades from now on. It's really nice to be able to play a game like it's the real world, rather than using a controller and looking at a screen.
Completely agreed. I'm a casual gamer, and my i5-2500K + GTX 760 serve me perfectly fine. I have a 1440p monitor, but I reduce the resolution to 1080p or 720p depending on how demanding the game is.
My upgrade will be determined and driven by VR. Whoever manages to deliver acceptable VR performance at a reasonable price will get my $.
And they will be competing in price and content against the PS4k + Move + Morpheus combo.
It will be interesting to see how much GDDR5X affects the scores vs. GDDR5. A 1080 vs. 1070 comparison will be very telling, or alternatively a downclocked 1080 vs. a 980 Ti...
Translating this into numbers, at 4K we’re looking at 30% performance gain versus the GTX 980 and a 70% performance gain over the GTX 980, amounting to a very significant jump in efficiency and performance over the Maxwell generation. That durn GTX 980 is just all over the board!
How does Pascal do on async compute? I know that was the big bugbear with Maxwell, with Nvidia promising it but it looking like they were doing the scheduling on the CPU, not the GPU like GCN.
That's what AMD provided. A custom-cooled nVidia 980 Ti will perform better than the stock model, yet people don't complain about that.
When Anand DID use third-party cards (460s, IIRC), there was a massive backlash from the community saying they were 'unfair' in their reviews. So now they just use stock cards. Blame AMD for dropping the ball on that one.
I'm hoping the full article compares it to cards outside the same class, like the 970. It's hard to judge upgrade effectiveness if cards are only ever compared against other top-end cards.
"I also wanted to quickly throw in a 1080p chart, both for the interest of comparing the GTX 1080 to the first-generation 28nm cards, and for gamers who are playing on high refresh rate 1080p monitors."
THANK YOU; high refresh seems to be swept under the carpet in favor of 3.8K displays. The old 780 I have is trying its best to remain relevant, but can't seem to make a 144Hz panel worth its while.
Ashes is a game that I only intend to run in DX12. For all intents and purposes it's the marquee DX12 title, and I expect hardware vendors to be able to handle it well. Especially as its engine was more or less designed for low level APIs from the start.
Hitman, on the other hand, had its DX12 implementation essentially bolted on after the fact.
More half-assed content from Anandtech. I'm not even surprised anymore. Can't wait to hear more excuses from Ian; that's what the audience really wants, right Ian? I keep coming back hoping you guys will get your shit together, but I think I'm ready to say goodbye.
Every single other outlet on the net with a 1080 review has achieved more than Anandtech, how do you think that reflects on you?
But I also don't make any apologies for how I've chosen to publish this. I had 4 days to work on this, and that's not sufficient time for a full AnandTech quality review.
Don't sweat it Ryan, I want an in-depth look into Pascal architecture and I really want to see how Pascal IPC compares to Maxwell's, my bet is it's about 10-15% lower overall.
If AnandTech was not included among certain sites hand picked to receive early review samples, that may actually reflect quite well on their editorial integrity.
Also, really not feeling the time urgency you seem to. It's not yet even possible to order the card, and a lot of related information that some would consider important - i.e. 3rd party cards and their performance - isn't anywhere close to being on the table either.
"If AnandTech was not included among certain sites hand picked to receive early review samples, that may actually reflect quite well on their editorial integrity."
To be clear, we received our sample at the same time as everyone else. The issue was that I had another (previously scheduled) function to attend when those samples were distributed. No malice on anyone's part, just bad timing all around.
", and in time-honored fashion NVIDIA is starting at the high-end."
Come on, at least acknowledge the fact that NVIDIA are actually releasing a video card based on a medium-sized GPU - the GTX 1080 - and marketing it as a flagship, with a price to match and then some.
No need to comment on the fact; no need to criticize NVIDIA for seeking huge margins in the consumer sector ever since Kepler, delaying the real high-end GPU, or any such thing. Just let your readers know from the get-go that the GTX 1080 is based on a mid-sized GPU, which is not the GP100 flagship to come.
A true time-honored fashion for NVIDIA would be releasing a new architecture with the biggest GPU and charging $500 from day one. Something that last happened with Fermi.
It beats every other GPU on the market; if that's not high-end... High-end is a moving target; it's whatever is the fastest at the time of writing.
Certainly, it could be faster - it always can be. But GP100 just doesn't have the availability yet. They could wait longer and then launch your true "high-end" first, but instead we get new toys sooner, which is always a good thing.
On the contrary, nowadays we get more performance late, and we pay double for it. We used to get the large GPU first with a new architecture - just like the GTX 480. Ever since Kepler, however, we've had to wait a year after release. In the meantime, NVIDIA have been charging flagship money for the medium-sized GPU - the equivalent of the GTX 460 - and releasing the vulgar, super-high-margin Titan somewhere in between. Essentially, by the time the 780 Ti, the 980 Ti, the Titans, and even the very cut-down 780 came out, they were already outdated products as far as technology goes, but still carried a premium price tag. Why is that so hard to understand?
As far as performance goes - of course a new architecture on a new node will be significantly faster; there's nothing amazing about that. That doesn't mean a video card based on a mid-sized GPU should be marketed as a flagship, as the best thing since sliced bread, and carry such a gruesome price premium - $700 for "irresponsible performance" - give me a break! The only irresponsible thing is blind consumers eating this up. That's why we need competition.
Keep making excuses for big companies, and see how they keep increasing pricing, delaying products, cutting features, and doing whatever the hell they want. Guess who gets screwed as a result of this - that would be you, and me, and every other consumer out there. So keep at it.
Just to clarify a bit more: going into Kepler, NVIDIA were quite nervous about how consumers would react to all this, and although journalists, including Anandtech, noted that the GTX 680 was not a direct successor to the GTX 580 but rather the new GTX 560 Ti, and as such was essentially twice as expensive, it didn't seem to bother consumers - perhaps because, as you say, it's so new and fast. Whether it's really because consumers are misinformed, don't care, or a combination of both is irrelevant - it's now history. NVIDIA managed to get away with it, and it has been that way ever since. And now, with Pascal, they're looking to expand on it all and charge even more - up to $150 extra, as noted at the end of this article. They might be looking to establish a price premium for overclocking capabilities as well: a sort of Intel K-series, but on top of a product that is already very expensive.
The Titan-class cards are just the other side of this story. After a successful GTX 680 launch, NVIDIA decided to try and do the same with the large Kepler GPU. On top of delaying the flagship product - the GK110 - they decided to, again, charge essentially double. And thus the original Titan was born. They were so nervous about it that they decided to enable serious compute performance on it, so that if it failed in the consumer sector, it'd sell in the compute world. It exceeded their wildest dreams - apparently, people were not only willing to throw money at them, but didn't know any better either. And so the writing went up on the wall, and we've been reaping the "benefits" ever since. It looks like we'll do the same again.
If you do, I'd wait till AIB cards become available. The reference 1080 OCs like crap compared to Maxwell: the reference 980 Ti got about a 20%-25% performance gain on OC, while the 1080 gets about 10%-12%. If you have an AIB 980 Ti, you might even be getting more from the OC. So to sum it up, an OCed AIB 980 Ti ends up only 15%-20% slower than an OCed 1080.
Looks like I get to eat my words about posting "doom and gloom" about a Friday 6pm press event. They didn't have any real "bad news" (although the reason for refusing to demonstrate 'ray traced sound' was clearly a lie - you can simply play the sounds of being in various places to an audience as easily in a movie as in VR). I wouldn't call it terribly great news either, just the slow and steady progression of a company without competition.
Looks like it competes well enough against the existing base of nVidia cards. It also appears that they don't feel a need to bother worrying about "competition" from AMD :( (Note that Intel appears to spend at least as many mm² and/or transistors on GPU space as this beast. What they don't spend is power (watts) and bandwidth. The difference is obvious, and I can't see them trying to increase either on their CPUs.)
One thing that keeps popping up in these reviews is the 250W power limit. This just screams for someone to take a (non-Founders Edition) reference card and slap a closed-loop water cooling system on it. The results might not be as extreme as the 390, but it should be up there. I suspect the same is true (and possibly more so, unless deliberately crippled) of the 1070.
"Note that Intel appears to spend at least as many mm and/or transistors on GPU space as this beast"
I don't think that's accurate at all. To my knowledge Intel haven't released specific die sizes or transistor counts since Haswell. But the entire CPU package of a 4770K is ~1.4B transistors (~one fifth of a GP104). Anandtech estimated ~33% of the die area (roughly 500M transistors) was dedicated to the 20EU GT2 GPU. Obviously the GT2 is hardly Intel's biggest graphics package, but even a larger one like the 48EU GT3e from the Broadwell i7-5775C must surely still have significantly fewer transistors than a GP104.
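(Editor's note: a rough worked version of that estimate as a C++ sketch. The ~1.4B and ~33% Haswell figures are the estimates cited in the comment; 7.2B is GP104's published transistor count.)

```cpp
// Rough scale comparison between Haswell's GT2 iGPU and GP104.
#include <cstdio>

int main() {
    const double haswell_pkg = 1.4e9;  // 4770K, entire chip (comment's figure)
    const double gt2_share   = 0.33;   // estimated die area devoted to GT2
    const double gp104       = 7.2e9;  // GP104's published transistor count

    const double gt2 = haswell_pkg * gt2_share;  // ~0.46B transistors
    printf("GT2 ~%.2fB vs GP104 %.1fB: ~%.0fx apart\n",
           gt2 / 1e9, gp104 / 1e9, gp104 / gt2);
    return 0;
}
```

That works out to roughly a 15x gap, which is the comment's point: Haswell's GT2 is nowhere near GP104's transistor budget.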
When you do a full review, could you spare a thought for some of us who are not into gaming? I would like to know about the audio side (sample rates supported, etc.) as an example, and a proper full test for using it with madVR (yes, we know it supports the usual frame rates, etc.). Some insights into 10/12-bit support on Windows 10 (not just for games & madVR DX11 FSE), incl. generic programs like Photoshop/Premiere, etc.
On a side note: if you're not into gaming, but prefer 4K@60p dual screen setup with 10bit colour, which GPU is best?
Architectural changes. By the end of the year, there will be some 4K HDR monitors, maybe even 120Hz. If I want to edit in Premiere with dual 4K HDR 120Hz screens, or I prefer a 5K screen over a single cable connection, what are my GPU choices? DP 1.3?
I also mentioned 10-bit support (not Quadro) and madVR. It's not this card (specifically) I'm interested in, but the architecture. There will be cheaper cards in the future for sure; however, they will use the same tech as here. Hence my curiosity.
"Some insights into 10/12bit support on Windows 10 (not just for games & madVR DX11 FSE) inc. generic programs like Photoshop/Premiere etc."
You're still going to want a Quadro for pro work. NVIDIA is going to allow 10bpc support in full screen OpenGL applications, but not windowed applications.
That's a bummer. Currently, I have 3x screens connected: 2x desktop monitors and 1x for HTPC through the amp. If I wanted full hardware HEVC 10-bit decoding and DP 1.3/1.4 for 2x 4K or 5K HDR monitors over 1x cable each, I'd need to give up 10bpc support for windowed apps. Or go with something like the Quadro M2000, with none of the latest goodies (DP 1.2 only; no HDMI 2.0b, full HW HEVC 10-bit decode, HDR, etc.). It will be quite a while before any new Quadros support them, regardless of price.
To be clear, you get 10bpc support for windowed D3D applications, so your HTPC idea will work.
The distinction is for professional applications such as Photoshop. NVIDIA has artificially restricted 10bpc color support for those applications in order to make it a Quadro feature, and that doesn't change for GTX 1080.
"Gamers however won’t be able to get their hands on the card until the 27th – next Friday – with pre-order sales starting this Friday." I hope this is true. I don't want to have to stay up all day hitting F5 until i secure my 1080FE
Looking at an unopened Gigabyte R9 390X G1 that I picked up for £250 (standard price for an R9 390X is £330-£360 in UK money). This is getting 50-66 percent of the framerate of the GTX 1080, but for slightly less than half the price ($599 for the non-Founders edition translates to roughly £500 inc. VAT).
Knowing what we know now about the likely performance of upcoming 14/16nm products, should I be sending it back?
Ryan, please consider integrating this in your upcoming review of the 1080; it would be extremely useful: clock the 1080 just like the 980 and then compare their performance. I would like to see how much of that 15-FPS-on-avg increase vs. the 980 Ti comes from the clock speed increase and how much of an impact Pascal actually has. As it looks right now, the 1080 is a disappointment - I was expecting something truly stellar from nVidia after touting this and that and making serious all-around changes, taking advantage of a process node half as big as the previous one... so far the 1080 is shaping up to be just an incremental upgrade, if not even a sidegrade once clock speed differences are negated. I hope I'm as wrong as one could be, though. Good preview so far!
Doing some quick calcs w/ BF4 FPS numbers gives 1080 - 111 FPS/MHz/core, 980Ti - 150 FPS/MHz/core, 980 - 75 FPS/MHz/core for 4K. The 1440p and 1080p numbers also follow suit (150/206/102 for 1440p, 231/319/159 for 1080p).
Essentially, doing meaningless number crunching does show that, normalized, the 980 Ti is better per MHz per core, at least for BF4. I used the boost clock numbers for the MHz. I hope it is investigated, because it seems like Nvidia spent significant extra transistor budget on other aspects (maybe FP16 compute?) to the detriment of their FP32 gaming chops.
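(Editor's note: a small C++ sketch of the normalization being done above, for anyone who wants to reproduce it. The boost clocks and core counts are the published specs; the FPS values are placeholders to be filled in from a review's numbers, not measurements of ours.)

```cpp
// Normalizes FPS by (boost clock x CUDA cores); the x1e7 scale factor
// puts results in the same range as the figures quoted above.
#include <cstdio>

struct Card { const char *name; double fps; double boost_mhz; double cores; };

int main() {
    Card cards[] = {
        // FPS values below are placeholders - substitute a review's numbers.
        {"GTX 1080",   100.0, 1733.0, 2560.0},
        {"GTX 980 Ti", 100.0, 1075.0, 2816.0},
        {"GTX 980",    100.0, 1216.0, 2048.0},
    };
    for (const Card &c : cards) {
        double norm = c.fps / (c.boost_mhz * c.cores) * 1e7;
        printf("%-10s %6.1f (normalized FPS per MHz per core)\n", c.name, norm);
    }
    return 0;
}
```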
I can already tell you right now that at an architectural level, per-clock per-core performance between GP104 and GM204 is virtually identical. Throughput of the ALUs, texture units, ROPs, etc has not changed. What makes GP104 faster than GM204 is the larger number of SMs, the higher clockspeeds, and a memory subsystem fast enough to feed the former.
(Which is not to discount the Pascal architecture. 16nm FinFET alone won't let you ramp up the clockspeeds like this. NVIDIA had to specifically engineer the architecture to hit those clockspeeds without driving up the power consumption)
There's only one thing I want to know about this card. Does it support the special instructions that the Tesla P100 has for half precision float (FP16), which double throughput? This is very important for deep learning and nobody has confirmed yet.
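(Editor's note: for reference, the mechanism in question is paired FP16 math on half2 values. Below is a minimal, hypothetical CUDA C++ sketch of that path - not something from the article - assuming a toolkit and GPU that expose native FP16 arithmetic (CUDA 7.5+, compute capability 5.3+; e.g. nvcc -arch=sm_61 for GP104). Whether GP104 runs it at double rate is exactly the open question.)

```cuda
// Issues paired FP16 FMAs via half2: one __hfma2 performs two FP16 FMAs,
// which is where the potential "double throughput" comes from.
#include <cuda_fp16.h>
#include <cstdio>

__global__ void fp16x2_fma(half2 *out, int iters, float seed) {
    half2 a = __float2half2_rn(1.0f);
    half2 b = __float2half2_rn(seed);           // runtime value, not foldable
    half2 c = __float2half2_rn(0.0009765625f);  // 2^-10, exact in FP16
    for (int i = 0; i < iters; ++i)
        b = __hfma2(a, b, c);                   // b = a*b + c: two FP16 FMAs
    out[blockIdx.x * blockDim.x + threadIdx.x] = b;
}

int main() {
    const int blocks = 1024, threads = 256, iters = 1 << 16;
    half2 *d_out;
    cudaMalloc(&d_out, blocks * threads * sizeof(half2));

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    cudaEventRecord(t0);
    fp16x2_fma<<<blocks, threads>>>(d_out, iters, 0.0f);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);
    // 2 FLOPs per FMA x 2 lanes per half2 instruction = 4 FLOPs per iteration
    double flops = 4.0 * (double)iters * blocks * threads;
    printf("approx FP16 rate: %.0f GFLOPS\n", flops / ms / 1e6);

    cudaFree(d_out);
    return 0;
}
```

Comparing the measured rate against the card's FP32 peak would show directly whether FP16 runs at double rate or is capped.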
I'm 100% sure you're wrong, because the GP106, or something like it, will be used in the Drive PX 2 and will have double-throughput half-precision support, since it's going to be used as a machine learning inference engine. If the GTX 1080 doesn't support double-throughput half precision, it's probably because they purposefully disabled it to prevent the cards from being used in high quantities for compute workloads. They will probably, at some point, come out with a Tesla based on the GP104 and/or the GP106 that does support double-throughput half-precision compute, to replace the M40 and M4 cards. Pascal does everything better than Maxwell, so it would be starving a growing industry to leave that segment to Maxwell for too long.
Huh? We know next to nothing about the GP106 architecture, unlike with the GP100 and GP104 chips, which, like I said, have different architectures - with GP104 (GTX 1080) being almost identical to Maxwell on a hardware level.
Not necessarily. In fact I would be very surprised if GP104 doesn't support the double FP16 throughput. Like yojimbo said, the more likely scenario is that half precision performance is capped in some way on GeForce cards (likely through firmware).
When we're talking about an architecture, we're speaking about an instruction set. Since GP104 lacks certain compute instructions compared to the GP100 in the P100, we can accurately say they have different architectures. Yes, they are both Pascal and they have the same features on a software level, but hardware-wise they are different. It doesn't matter how Nvidia enforces those differences; what matters is that they're different at an architecture level (instruction set).
Why is everything 100% with you? Neither of us know 100% anything about this issue. And the fact that half precision at double throughput is not possible on the GTX 1080 does not mean that it's not possible on the GP104.
Further explanation of what you said "huh?" to: NVIDIA revealed the Drive PX 2 at both CES 2016 and GTC 2016. It has two Pascal-based Tegra chips and two larger Pascal GPUs. The main purpose of the Drive PX 2 will be to run inference algorithms for self driving cars. There are large portions of these algorithms which only require FP16 precision. NVIDIA would be leaving performance on the table if they didn't include the FP16 throughput enhancements in whatever chips they are using for the Drive PX 2. And those GPUs are definitely not GP100s. Unless they specially designed another GPU that is based on the GP100, but much smaller, they are probably using something along the lines of a GP106 or GP107 for that purpose.
I'm guessing it's easier to design 6 GPUs and put FP16 enhancements in all of them than it is to design 8 GPUs and put FP16 enhancements in 4 of them. I don't think you have any reason to believe it's so difficult for them to put the FP16 enhancements into GP104. (They had already done so for the Maxwell-based Tegra X1, by the way.) You just seem to want to believe things which fit into your preferred narrative of "GTX 1080 is almost identical to Maxwell".
@vladx They're all based on the same underlying architecture (Pascal). I'm actually not sure why you think GP104 is closer to Maxwell architecturally than GP100. Are you referring to the SMM layout?
They have competition already with Xeon Phi and CPUs. The trouble with AMD's GPUs for deep learning is that they don't have nearly the same level of library support as NVIDIA's GPUs do. Intel is also hoping to adapt FPGAs for deep learning purposes, I think, but I doubt that's going to help you out much.
Each new gen sees around an extra 10-14 fps being added to the top card over the previous gen. No. No thank you. These companies keep DRIP FEEDING us small advances and, obviously, this is business.
Spend your cash, fine, but they're laughing at us each time. (I have an ebay 980)
Though the move was from Maxwell to Pascal, it looks more like Paxwell - Maxwell on steroids: ~70% from clocks, ~30% from compression, not much innovation. And that PCB is a disgrace, skimping on the 6th phase and using only one MOSFET per VRM phase. Weren't they speaking of premium components to justify the added premium? It certainly doesn't look premium.
Credit goes to Dan on this one. He took the time to go through it and find a performance-intensive section we could reliably benchmark, which is always a challenge with RPGs.
So it's 25-30% faster than a 980 Ti - a few results were less, a few were more. Let's just call it 30%. And it costs about $100 more than a 980 Ti, so call that 15%.
So just call it roughly 15% better value than a 980 Ti. That's not very impressive for 28nm -> 16nm.
The 1070 will be more interesting to see numbers on. As will AMD's response.
I'm really curious what the 1070's TDP will be like and if it'll still be like 70+ fps with maxed quality settings at 1080p... Might be the perfect sweet spot for a HTPC that you also want to basically act as a console game system :)
Benchmark comparisons? When putting a new card up against the older cards, are the numbers for the older cards from the latest drivers available, or year-old numbers that may have changed with driver updates?
I think the author may have missed that what matters in node shrinks is the relative node size, not the absolute difference of the shrink.
Meaning it's not surprising that 28nm to 16nm is a larger performance increase than going from 55nm to 40nm. Comparatively speaking, the transistors shrunk much more in the former.
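(Editor's note: a back-of-envelope C++ illustration of that point, taking the node names at face value - they're marketing labels, so treat the output as approximate.)

```cpp
// Relative shrink is what matters: compare linear ratios and the
// ideal (square-law) density scaling for each transition.
#include <cstdio>

int main() {
    const double shrinks[][2] = { {55.0, 40.0}, {40.0, 28.0}, {28.0, 16.0} };
    for (const auto &s : shrinks) {
        double linear = s[1] / s[0];
        printf("%2.0fnm -> %2.0fnm: linear %.2fx, ideal area %.2fx\n",
               s[0], s[1], linear, linear * linear);
    }
    return 0;
}
```

28nm to 16nm is a ~0.57x linear shrink (~0.33x ideal area), versus ~0.73x linear (~0.53x area) for 55nm to 40nm, so the newer jump is considerably bigger in relative terms.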
Kudos as always for being thorough. No overclocked cards, no GameWorks, a balanced set of games with an equal share favoring either Nvidia or AMD - that makes a balanced review, a rarity these days and highly appreciated.
Is it fair to publish relative power consumption in Crysis 3 of the 1080 vs other cards when the 1080 is pushing twice the frame rate of some of those cards? Seems like a better comparison would be to lock the framerate such that the lowest-end in the list can keep up, enable vsync, and test power consumption when the cards are doing the same amount of real work.
Would it not still show the same relative differences though? The net power consumptions would obviously be different but the relative differences would simply reflect process size, transistor count, clock, etc; i.e. nothing that would be particularly surprising. Nor useful, I should imagine, even to the user for whom power consumption does matter - would such a user discount a product as being a potential purchase because it does not fit within a required power window or would they examine how best to deal with the additional power requirements and heat generation? Personally I only look at the power figures to gauge how hot my office is going to get :)
Glad all the details of the founders edition are out. Almost bought one. Fortunately was able to cancel my pre-order after finding out there is nothing special about it aside from the name.
Fingers crossed a good closed system water cooled 1080 comes out soon!
Why are there no compute tests? AnandTech always does compute tests, and now, at the presentation of a new architecture, all of a sudden they disappear. Why?
I've seen this already in tables in some other articles, so I've got to ask: since when is memory clock measured in Gbps? It doesn't make any sense to me; one would say a clock is measured in Hz, while gigabits per second goes for bandwidth, hm?
The memory clock isn't used for determining processing capabilities as such, unlike with the GPU core itself, where it is used to determine peak FLOPS, pixel read/writes, etc. In the case of memory, all that matters (at face value) is how much data can be transferred to and from it, and this is indicated by the Gbps figure - the memory can shift, say, 7 Gbits per second per pin, and because the bus is 256 bits wide, the total bandwidth is 1792 Gb/s (or 224 GB/s). So one might ask, why not just quote the bandwidth? This used to be the case, simply because bus widths across different vendors and SKUs were remarkably similar compared to the broad variation one sees now.
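(Editor's note: the same arithmetic applied to the GTX 1080's published memory spec - 10 Gbps GDDR5X on a 256-bit bus - as a tiny C++ sketch.)

```cpp
// Gbps-per-pin x bus width = total memory bandwidth.
#include <cstdio>

int main() {
    const double gbps_per_pin = 10.0;   // GTX 1080's GDDR5X data rate
    const double bus_bits     = 256.0;
    const double total_gbit   = gbps_per_pin * bus_bits;             // 2560 Gb/s
    printf("%.0f Gb/s = %.0f GB/s\n", total_gbit, total_gbit / 8.0); // 320 GB/s
    return 0;
}
```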
Anandtech is a good site that I have visited frequently since the early 2000s to quench an insatiable appetite for primarily CPU&GPU reviews. However there are now sites out there, one in particular, that are doing it better than Anandtech and consequently I don't spend so much time here.
It is now almost a week since the embargo on GTX 1080 reviews was lifted and, previews aside, there is still a deafening silence from Anandtech. Yes, the apologists will argue Anandtech does a deeper review, give them time and all that, but seriously, when your review is this late, it begins to look like incompetence. Or perhaps you consider your reviews to be elitist, the holy grail among tech websites, and that therefore any delay is acceptable? What pressing projects are the GPU staff working on that could explain this state of affairs?
If you own a 980 Ti... No need to get overly excited/worried about upgrading to this particular GPU, unless you just have to have the "latest and "greatest". Initial testing was done vs. the GTX 980? Rather odd and goes back to my initial statement.
On the other hand, if running a 970 (or below), a 980 (apparently, according to NV), or a 700 series GPU, this presents a very good upgrade path.
Time will tell what will happen with that MSRP, however, once vendors start doing their add-ons: cooling solutions, factory OCs, software bundles, etc...
I'm shocked every day that goes by that no GTX 1080 review exists yet. The move to 14-16nm has got to be one of the most highly anticipated in recent history, and all we have is a preview?? BTW I don't need a tech site to price check the 6700 CPU for me, I can do that myself.
I thought the main purpose of a review was to give a potential customer an idea of what to expect, some expert analysis, and some honest judgement on a product and whether or not it's worth your money as a consumer. The card releases in two days, and if you don't have a review out by then, what is the bleeding point?
I'm kind of expecting it tomorrow, the day before the product gets into people's hands. I know pre-orders already started, but those people were probably going to buy regardless.
Not up this morning; don't they usually post content overnight? They were well behind the pack in publishing a Fury X review as well. With a review this late, I hope they do some extensive overclocking and compare it to SLI and CrossFire. Heck, if they don't get the review up by tomorrow, maybe they can benchmark the 1080 in SLI.
Yeah, this is starting to get a little worrisome. Ryan mentioned getting it out sometime last week, and it's getting close to a whole extra week on top of that now...
Maybe he found something interesting to test and wants to confirm before publishing? We won't know until it's posted obviously, but at least I'm not chomping at the bit till I see what AMD is offering this year.
10 days later... Today is the official "release" day... Nothing new? How about posting a 95%-done analysis and letting us know what you're still working on? That would be a lot better than deafening silence.
Where could I read an analysis of the new Nvidia GeForce GTX 1080?
I'm asking because I haven't followed other sites for a long time, but I'm now so fed up with this. Broken promises of a review and then nothing but silence? I will still come to Anandtech first, but I'm not going to wait 10 days for an important review!
@Ranger1065 Would you please elaborate your previous comment?
Anandtech is such a f***ing joke nowadays. Most tech reviewers already published their GTX 1080 & 1070 reviews today, and yet Anand is still stuck with a 1080 PREVIEW. Hahaha, what a joke......
Anandtech sure is slow. Other major sites have already begun to post reviews of the GTX 1070, while this site hasn't even posted a review of the GTX 1080, which came out days ago...
While I have been advocating for AT, this hypothetical 1080 review could just go in the dust bin now. All over the web there are lots of detailed reviews for the 1080, and now also for the 1070. In the meantime, since this preview, Ryan has posted two more articles. Unless his GTX 1080 review sample malfunctioned, there is hardly any excuse for such a huge delay.
Starting to wonder if perhaps they found some crucial problems and have been spending this time trying to run them to ground. But IMHO that doesn't make the situation much better as in that case they should have updated us at some point with said fact.
Why bother at this point? Using the excuse that such an in-depth article takes time to write while telling everyone it was almost finished weeks ago is flat-out dishonest, especially considering it's been happening on a consistent basis.
They can't keep using the excuse that Anandtech's articles are very in-depth and take longer. That might have been true years ago, but not anymore - not when sites like PCPer and TechReport go just as in-depth and by some miracle of God are able to get the article out on time.
The main site and the forums have been going downhill here for a while now so don't hold your breath on the "full review".
How to fade into obscurity, by the artists formerly known as Anandtech.
Take over 4 weeks longer than some kid on YouTube to review the fastest GPU to date, with the first die shrink in years; don't bother at all to review the 960, or the S7 (completely); not sure what else you need to do - this is a pretty good way to sewer your company.
At this point I think it's safe to say there will be no review for the 1080. Most likely no 1070 review either considering the cards have been out for a while now.
Strafe - Tuesday, May 17, 2016 - link
Preview?! Anandtech, I am disappoint.

tipoo - Tuesday, May 17, 2016 - link
They can't post a review yet...

Qwertilot - Tuesday, May 17, 2016 - link
They also have some quite strict standards regarding what counts as a full review :)

ImSpartacus - Tuesday, May 17, 2016 - link
Yeah, gotta love their "previews". Still has tons of commentary and a fantastic look at the major metrics that people care about. I've seen reviews from other sources that were worse.
tipoo - Tuesday, May 17, 2016 - link
It's three paragraphs in. Quickies are for other websites ;)

Devo2007 - Wednesday, May 18, 2016 - link
If you think a site like HardOCP did things quickly, you are mistaken. Sadly, a lot of things with Anandtech have been published at a leisurely pace as of late (especially phone reviews). This one doesn't surprise me.

Pissedoffyouth - Wednesday, May 18, 2016 - link
Completely given up on expecting a SD820 deep dive, and the Galaxy S7 part 2. Do they even do phone reviews anymore?

Devo2007 - Wednesday, May 18, 2016 - link
Well they did dodo the iPhone SE review. Nothing on the G5 or HTC 10 though.

Devo2007 - Wednesday, May 18, 2016 - link
Do - LOL @ phone typing "dodo"

Eleveneleven - Monday, May 23, 2016 - link
Yeah they took forever to post their SurfaceBook review compared to everyone else. Like really lazily slow.

Stuka87 - Tuesday, May 17, 2016 - link
It's not that they can't, it's that a full Anandtech review is gigantic, and takes more than 2-3 days.
Beararam - Wednesday, June 15, 2016 - link
That's garbage. All the other GPU reviews (the flagships) have been done on release day.

pikkon39 - Tuesday, May 17, 2016 - link
The NDA lifted today, so yes, they can post reviews.

bigboxes - Wednesday, May 18, 2016 - link
It's a paper launch. I expected better from nVidia. Lame.

poohbear - Thursday, May 19, 2016 - link
All the other sites posted a full review. But whatever.

Ushio01 - Tuesday, May 17, 2016 - link
Go to Hexus.net for a full review then. (It's one of the 3 sites I use, along with Techreport and normally here.)

Eden-K121D - Tuesday, May 17, 2016 - link
If AMD responds with a price void on Fury cards, it'll be interesting to see how the market plays out 😉😉😉

Eden-K121D - Tuesday, May 17, 2016 - link
Price cut, I mean.

BurntMyBacon - Tuesday, May 17, 2016 - link
I prefer void. It's easier on the pocketbook.

Eden-K121D - Tuesday, May 17, 2016 - link
haha
medi03 - Tuesday, May 17, 2016 - link
789€ in Germany. No thanks.

milkod2001 - Tuesday, May 17, 2016 - link
€823 in Ireland. Fckin joke.

bananaforscale - Wednesday, May 18, 2016 - link
Early adopter tax.

nagi603 - Tuesday, May 17, 2016 - link
So much for a Fury X price cut...

just4U - Wednesday, May 18, 2016 - link
Likely about $1200 in Canada.. uh.. no.

qasdfdsaq - Tuesday, May 17, 2016 - link
As long as the follow-up doesn't go the way of part 2 of the Galaxy S7 review... 2.5 months and no news or updates. Mhmm.

milkywayer - Tuesday, May 17, 2016 - link
I'm jaw-dropped by the performance jump. Guess what's going to replace my 970 in a month? (assuming I'll be able to find the damn thing even with another +$100 premium here in Pakistan)

cknobman - Tuesday, May 17, 2016 - link
Performance improvement is nice but not jaw-dropping. Honestly, I think Nvidia has left the door open for AMD to take control of the high end later this year with the new Fury line.
I'll be waiting on making a purchase.

ChefJeff789 - Tuesday, May 17, 2016 - link
Later this year? AMD has said that Vega is not coming until next year. I'd be shocked to see it sooner.
Murloc - Thursday, May 19, 2016 - link
I think people are always overoptimistic on AMD.

medi03 - Tuesday, May 17, 2016 - link
Nothing jaw-dropping in a 20-30% increase, especially considering the 28nm => 16nm jump in the process.

Azune - Tuesday, May 17, 2016 - link
You are comparing apples to oranges. This is the smaller of the Pascal chips, so it should be compared to a 980. And that's a 50-80% difference.

beginner99 - Tuesday, May 17, 2016 - link
I compare on performance/$ and in that regard it plain sucks. The 1070 will be way better in that regard.
zoxo - Tuesday, May 17, 2016 - link
Don't forget that you should compare the GP104 chip to the GM104, so vanilla 980, as the 1080 Ti will come down the line.

Yojimbo - Tuesday, May 17, 2016 - link
Well the vanilla 1080 will be $50 more than the vanilla 980, so that should be taken into account.
@zoxo: "Don't forget that you should compare the GP104 chip to the GM104, so vanilla 980, as the 1080 Ti will come down the line"Architecturally, yes. That is the comparison to make. However, from a consumer standpoint, the 1080 is positioned as the new halo product and it comes in closer to the price of the older halo product (980Ti). Thus, until such a time as a new halo product (1080Ti? / Titan?) emerges, it will be compared to the 980Ti. Don't forget, the 980 was compared with the 780Ti and there was a significant time gap before the 980Ti hit the scene. I doubt a 1080Ti will be short in coming.
Commodus - Tuesday, May 17, 2016 - link
Better to be honest and post a preview than rush out a half-hearted review.
3ogdy - Tuesday, May 17, 2016 - link
Hello Disappoint! How are you today?
WinterCharm - Tuesday, May 17, 2016 - link
You must not be very familiar with Anandtech. Their full review takes time, but it will melt your face with how thorough it is.
damianrobertjones - Tuesday, May 17, 2016 - link
Not disappointed?
tipoo - Tuesday, May 17, 2016 - link
After being on 28nm since 2011, PC hardware is finally starting to get interesting again. Pascal, Polaris, Zen, exciting times.
JimmiG - Tuesday, May 17, 2016 - link
"After being on 28nm since 2011, PC hardware is finally starting to get interesting again. Pascal, Polaris, Zen, exciting times."
Couldn't agree more. Even though this card makes my 970 look pathetic, it also makes me very happy to see these performance gains in an age where it looked like the entire PC market had stagnated.
If you ignore the price premium, this kind of leap over the 980 (which is the card that the GTX 1080 technically replaces) reminds me of previous grand GPU launches like the Radeon 9700, GeForce 4, and GeForce 8800.
tipoo - Tuesday, May 17, 2016 - link
Even the $380 1070 should be a decent leap following this, as it still has Titan X-beating performance. I hope both camps drive performance forward at the $200 price point.
medi03 - Tuesday, May 17, 2016 - link
€789 for the 1080 in Germany (the announced Founders Edition price was $699). The Founders Edition 1070 was supposed to be $449, so I guess €480-520. (One euro is worth more than one dollar, but somehow that's the pricing we get... =/)
jasonelmore - Tuesday, May 17, 2016 - link
lol, the euro has dropped in value so much it's practically on par with the dollar. Blame your government's VAT, not currency exchange.
vil2 - Tuesday, May 17, 2016 - link
No. $699 is about €617. Add roughly 20% VAT and it makes about €740, not €789.
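For anyone who wants to rerun that arithmetic, here's a minimal sketch; the ~1.133 USD-per-EUR rate and the flat 20% VAT are assumptions based on the figures quoted above:

```python
# Minimal sketch of the EU pricing math above.
# Assumptions: ~1.133 USD per EUR (mid-May 2016) and a flat 20% VAT.
usd_price = 699.0            # announced Founders Edition price in USD
usd_per_eur = 1.133          # assumed exchange rate

pre_tax_eur = usd_price / usd_per_eur   # ~617 EUR before tax
with_vat_eur = pre_tax_eur * 1.20       # ~740 EUR with 20% VAT

print(f"pre-tax: {pre_tax_eur:.0f} EUR, with VAT: {with_vat_eur:.0f} EUR")
# pre-tax: 617 EUR, with VAT: 740 EUR -- still short of the 789 EUR street price
```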
JeffFlanagan - Tuesday, May 17, 2016 - link
>Even though this card makes my 970 look pathetic...
I feel good about that. I have to disable SLI in my dual-970 system for VR, so switching to one powerful card will be a big improvement.
Agent_007 - Tuesday, May 17, 2016 - link
"GTX 1080 should be 3x faster than the GTX 980 or Radeon HD 7970"980 -> 680
Golgatha - Tuesday, May 17, 2016 - link
So the recent $500 sale price for the GTX 980 Ti is about where it should be, considering the GTX 1080's performance and cost are each about an equal percentage higher. That's personally not what I'd call a good value, but when you have a monopoly at the high end of the graphics card spectrum, that's about the pricing I'd expect.
strafejumper - Tuesday, May 17, 2016 - link
The benchmark I was looking at (Doom at 1440p @ Guru3D) had the 1080 21.7% faster than the 980 Ti. Newer prices for the 980 Ti ($500) vs the 1080 ($600) are 20% higher.
Seems on point.
mr_yogi - Tuesday, May 17, 2016 - link
Are all the comparison cards using the drivers they were originally reviewed with, or the latest drivers available?
Ryan Smith - Tuesday, May 17, 2016 - link
Latest available: 368.13 for the NVIDIA cards and 16.5.1 for AMD.
mr_yogi - Tuesday, May 17, 2016 - link
Many thanks :)
just4U - Wednesday, May 18, 2016 - link
Which is likely why we see a marked improvement in some cards' overall performance. I'd have expected some of those to be further behind than they actually are.
Ushio01 - Tuesday, May 17, 2016 - link
Could this be more underwhelming? Guess I can skip this gen and keep my 670 for another few years.
Jtaylor1986 - Tuesday, May 17, 2016 - link
This card exists in the zone of being overpowered at 1080p and 1440p and still a hair too slow at 4K in some titles. I guess we will have to wait until the big-die 14/16nm GPUs come out before we get no-compromise 4K.
FMinus - Tuesday, May 17, 2016 - link
If you have a 980Ti, overclock it (it probably already is), and it's going toe to toe with the stock 1080; no need to upgrade really.
Looking at the reviews right now, I doubt the 1070 will even touch the 980Ti, and if they keep the EU pricing up, it's highly overpriced at €460-520 for what it delivers. So I'm just gonna wait for what AMD brings to the table.
TheinsanegamerN - Tuesday, May 17, 2016 - link
Or wait for 3rd-party models. TechSpot showed pretty good gains from OCing, so the big coolers (or liquid cooling) that can hit 2+GHz are going to be where the 1080 shines.
jasonelmore - Tuesday, May 17, 2016 - link
EU pricing is high because of VAT, not currency exchange.
BrokenCrayons - Tuesday, May 17, 2016 - link
If you're not trying to drive 4K resolutions, there probably isn't a compelling reason to upgrade from a 670 until the generation after the 10x0. I do agree that the 1080 strikes me as "yet another GPU refresh" because the performance increase isn't significant and the power/thermal numbers are only holding steady despite the more efficient FinFET process.
I'm still interested in reading the full review, but at this point Ryan's comment about AMD's possible future plans - "Rather the company is looking to make a run at the much larger mainstream market for desktops and laptops with their Polaris architecture, something that GP104 isn't meant to address." - is far more interesting to me, since I'd like to see the new process node put to use in laptops and in the lower-end portions of the GPU market.
DanNeely - Tuesday, May 17, 2016 - link
The GP106 (presumably 1060/1050) is due out in the fall.
AMD will initially have the middle of the market to itself. It'll be interesting to see how well they're able to exploit it, though. Not having a true flagship until Vega launches will hurt them among the large body of ignorant consumers who look at the headline numbers for top-of-the-line cards because they're the most visible, and buy based on that; a problem that's been dogging them for the last few years as nVidia has grown its market share.
The biggest question is whether the lack of a flagship at launch is due to the unavailability of HBM (i.e. Vega doesn't have GDDR5/X memory controllers at all), a deliberate decision to go for the center of the market first, or an indicator that GloFo is struggling with 14nm yields. The latter is alarming if true, since it would mean that despite probably being able to crush nVidia in the mid-range for the next few months, limited availability would prevent them from exploiting their lead effectively at a time when AMD desperately needs a cash-generating win somewhere.
medi03 - Tuesday, May 17, 2016 - link
PS4K is to be released in October this year (major French retailer leak), so 14nm should be there.
Also, remember that it's actually Samsung's 14nm.
BrokenCrayons - Tuesday, May 17, 2016 - link
Yes, there's no question that there'll be some lost sales over the impression of market leadership that spills over into unrelated segments, where competition for the fastest high-end GPU isn't really relevant. Buyers who don't look at the value proposition of the specific product they're purchasing relative to its price bracket are pretty commonplace when it comes to computer components. I think it might hurt AMD's bottom line, but maybe not if the mid-range volume is high enough to offset those lost sales.
With respect to 14nm yields, I'd think that positioning the company to tackle the middle of the GPU price/performance market would be exceptionally unwise if there were problems with yield, so I don't think it's worth worrying much about. Lower-end GPUs use less wafer, and that might offer an advantage, but lower-priced cards typically sell in larger numbers than the top-end cards, so demand will be higher and expectations for fab output might be higher as well. I'd like to think the decision is deliberate. AMD has also exhibited a history of targeting unfilled or less well served segments in order to find a niche that generates sales when they aren't in a position to lead in performance, as demonstrated by their dropping out of the high-end CPU market. That might be a bad strategy though, since it hasn't done them any favors at retaining CPU market share, and it does look like they're following a similar course with graphics.
I'm not sure what to think really, but I will be keeping an eye out for AMD's upcoming graphics products as they're released, since they may offer more value for the dollar. I don't really need a lot of GPU since I keep resolutions low and use Steam's streaming exclusively now, but I would like to upgrade out of the GT 730 with a 16/14nm card that offers a little more of everything while staying in a reasonable power budget.
Lolimaster - Tuesday, May 17, 2016 - link
Considering that the GPU in the current-gen A10 APUs destroys your GT 730, the 50W Polaris 11 will probably deliver around GTX 960/R9 370 performance.
doggface - Tuesday, May 17, 2016 - link
The problem with low-end AMD cards atm is that they lack features. Give us a $150-200 card with 4K 10-bit hardware H.265 decode, HDMI 2.0, DP 1.4, etc. and moderate gaming performance, and it will sell. Give us great performance/cost and shitty features? Watch it sit on the shelf.
Michael Bay - Wednesday, May 18, 2016 - link
Everything destroys the GT 730; extrapolating anything out of such comparisons is wishful thinking at best.
BrokenCrayons - Wednesday, May 18, 2016 - link
The 730 was a cheap upgrade I did to get a hotter-running and far older GeForce 8800 GTS out of my system last year, to take some load off the power supply (only 375 watts) so I could upgrade the CPU from a tired Xeon 3065 to a Q6600 without pushing too hard on the PSU. The only feature I really did bother making sure I got was GDDR5, so the chip wasn't hamstrung by 64-bit DDR3's bandwidth issues. The A10's iGPU would indeed make it look underpowered, but I'm not in the market for integrated graphics for my desktop.
However, it's long overdue for a rebuild, for which I'm gathering parts now. I would have considered an A10, but instead I just picked up an Athlon X4 and will carry the 730 forward onto the new motherboard for a little while, until 16/14nm makes its way down the product stack into lower-end cards. Since I plan to eventually purchase whatever new-generation hardware is out on the smaller process node anyway, a CPU with an iGPU that ultimately ends up being unused doesn't make a lot of sense. In the short term the 730 should be fine for anything I do anyway, since I have no reason to push higher resolutions or use any sort of spatial anti-aliasing. All of that doesn't really matter once the game's video and audio are rolled up in an h.264 stream and pushed across my network from my gaming box to my netbook, where I ultimately end up playing any games on a low-resolution screen anyway. I think something around a GTX 950's performance would be perfectly fine for anything I need to do, so I'm content to wait until I can get that performance for around $100 or less.
Spending my fun money on a computer is a very low priority, and I can always wait until later to get newer/faster hardware if a game I'm interested in playing doesn't run on my current PC. Such is the case with Fallout 4, but I won't bother with it until all of its DRM is out, there are patches that address most of its issues, and it's got a GOTY edition on discount through Steam for $20. By then, whatever I'm running will be more than fast enough to offer an enjoyable gaming experience without me struggling and grubbing around to find high-end gear for it or diverting money from seeing films, traveling, or dining out. I also don't have to bother with overclocking, buying aftermarket cooling solutions, managing cables to optimize airflow, or any of that other garbage I used to deal with years ago... I don't know how many hours I spent playing IDE cable origami so those big ribbons wouldn't impede a case fan's air current over a heatsink, just to eke out one or two meaninglessly fewer degrees C on an unimportant component. Now, screw it: I put crap together once and forget about it for a few years, enjoying the fun it provides along the way, because I finally figured out that the parts are just a means to obtain a few hours a week of recreation and not the ends themselves.
paulemannsen - Thursday, May 19, 2016 - link
Man, you really think too hard just for Angry Birds.
BrokenCrayons - Thursday, May 19, 2016 - link
I understand that the idea of someone playing casual games while also keeping tabs on computer hardware is somehow a really threatening concept, but don't let it cloud your thoughts so much that you assume Angry Birds and Fallout 4 are mutually exclusive. You can be smarter and better than that if you try.
lashek37 - Friday, May 20, 2016 - link
As soon as AMD comes out with their card, Nvidia will unleash the GTX 1080 Ti, lol. 😂
Yojimbo - Tuesday, May 17, 2016 - link
They won't have the middle of the market completely to themselves. They'll have the only new cards in the segment for 2 or 3 months, but during that time those cards will be competing with the 980 and 970. AMD, on the other hand, probably can't make much money selling Fury cards priced to compete with the 1070, and they'll have virtually nothing competing with the 1080, and that situation will last for 6 or more months. That's the reason AMD will be hurt, not because of "ignorant customers", as you claim.
Yojimbo - Tuesday, May 17, 2016 - link
As an aside, if consumers were ignorant to choose new Maxwell cards over the older AMD cards competing against them, why will they not similarly be ignorant to choose new Polaris cards over the older Maxwell cards competing with them?
etre - Tuesday, May 24, 2016 - link
I fail to see how choosing old tech over new tech for a price difference of a few euros is the smart thing to do. Everyone wants new tech; it's a psychological and practical factor.
As an example, where I live, in winter we can have -10 or -20C, but in summer it's not uncommon to exceed 40C. For me power consumption is a factor: less heat, less noise. The GTX line is well worth the money.
cheshirster - Tuesday, May 17, 2016 - link
The last time they went middle-first it was a big success (4870 vs GTX 260).
I don't see a problem for them if their P10 can touch 1070 perf for <$400.
Yojimbo - Tuesday, May 17, 2016 - link
From the rumors, I doubt the P10 is going to touch 1070 performance.
dragonsqrrl - Wednesday, May 18, 2016 - link
The 4000 series didn't start at the middle. The 4870 was the high-end card in the stack.
The highest-end Polaris 10 based card is rumored to perform similarly to the 390X, so quite a bit below early estimates for the 1070.
Lolimaster - Tuesday, May 17, 2016 - link
AMD and Nvidia knew that HBM2 would not be ready for mass production until Q4 2016 or early 2017.
Yojimbo - Tuesday, May 17, 2016 - link
Isn't it over 4 times faster than a 670? If the 670 still works for you, will something being 10 times faster make a difference? Are you looking to jump up from 1080p with low quality settings to 4K with high quality settings?
bananaforscale - Wednesday, May 18, 2016 - link
Funny, I'm thinking of replacing my 770, but with a 1070, because of the lower TDP, faster memory (and more of it), and more eye candy. Granted, I may find I need to upgrade the CPU too, but that's life, and it's approaching five years old anyway...
rtho782 - Tuesday, May 17, 2016 - link
I wish there were 1440p benchmarks; with the advent of monitors like the RoG Swift, this is the most common "step up" from 1080p these days.
Eden-K121D - Tuesday, May 17, 2016 - link
They did well considering how nvidia positioned it as a single-card solution for 4K.
ZeDestructor - Tuesday, May 17, 2016 - link
You may find Bench to be of relevance to you then: http://www.anandtech.com/bench/product/1714?vs=171...
Eden-K121D - Tuesday, May 17, 2016 - link
I have a question: are second-hand $350-400 980 Ti cards a good buy?
RussianSensation - Tuesday, May 17, 2016 - link
Wait for the GTX 1070 review to decide. I would estimate that if you have the patience to wait for after-market $380-400 GTX 1070 cards, they will be a better buy than a used 980 Ti.
Eden-K121D - Tuesday, May 17, 2016 - link
Thanks.
Meaker10 - Tuesday, May 17, 2016 - link
The Founders card, so premium we don't even populate half the power delivery chips.
BillyONeal - Tuesday, May 17, 2016 - link
They build these reference designs for the Gx100 chip; the Gx104 chip uses much less power and doesn't need as much power delivery hardware.
Shadowmaster625 - Tuesday, May 17, 2016 - link
"Translating this into numbers, at 4K we’re looking at 30% performance gain versus the GTX 980 and a 70% performance gain over the GTX 980, so we’re looking at "at typo!
Ryan Smith - Tuesday, May 17, 2016 - link
There are no typos, just happy little accidents.
FMinus - Tuesday, May 17, 2016 - link
Oh, you Bob...
Badelhas - Thursday, May 19, 2016 - link
Ryan, when will you publish the full review of the HTC 10? Cheers
thetuna - Tuesday, May 17, 2016 - link
Nvidia starting their graphs at 50%... I guess it's expected by now.
QinX - Tuesday, May 17, 2016 - link
Can someone explain how the GTX 980 went from 29.1 FPS in Crysis 3 at 4K Very High quality at launch to 21.1 FPS in this review? If it's just drivers, then WTH happened? That's a ~27% performance drop.
Ryan Smith - Tuesday, May 17, 2016 - link
http://images.anandtech.com/graphs/graph8526/67721...
We only started using C3's Very High quality setting this year, now that cards have caught up with the game. The GTX 980 review was with High quality.
QinX - Tuesday, May 17, 2016 - link
Thanks for the explanation. I was worried that support for older games was already going down.
Badelhas - Tuesday, May 17, 2016 - link
What about including the HTC Vive in your benchmarks? If you talk about the VR benefits, you have to show them in graphs; it's your specialty, AnandTech! ;)
Seconded. At this point VR gaming is much more interesting to me than even 4K gaming, and will drive my video card upgrades from now on. It's really nice to be able to play a game like it's the real world, rather than using a controller and looking at a screen.
MFK - Tuesday, May 17, 2016 - link
Completely agreed.
I'm a casual gamer, and my i5-2500K + GTX 760 serve me perfectly fine.
I have a 1440p monitor, but I reduce the resolution to 1080 or 720 depending on how demanding the game is.
My upgrade will be determined and driven by VR. Whoever manages to deliver acceptable VR performance at a reasonable price will get my $.
And they will be competing in price and content against the PS4k + Move + Morpheus combo.
Ryan Smith - Tuesday, May 17, 2016 - link
It's in the works, though there's an issue with how many games can be properly tested in VR mode without a headset attached.
haplo602 - Tuesday, May 17, 2016 - link
It will be interesting to see how much GDDR5X affects the scores vs GDDR5. 1080 vs 1070 will be very telling, or alternatively a downclocked 1080 vs a 980 Ti...
fanofanand - Tuesday, May 17, 2016 - link
Excellent preview! Little typo here: "Translating this into numbers, at 4K we're looking at 30% performance gain versus the GTX 980 and a 70% performance gain over the GTX 980, amounting to a very significant jump in efficiency and performance over the Maxwell generation." That durn GTX 980 is just all over the board!
tipoo - Tuesday, May 17, 2016 - link
How does Pascal do on async compute? I know that was the big bugbear with Maxwell, with Nvidia promising it but it looking like they were doing the scheduling on the CPU, not the GPU like GCN.
http://www.extremetech.com/extreme/213519-asynchro...
https://forum.beyond3d.com/threads/dx12-performanc...
Stuka87 - Tuesday, May 17, 2016 - link
I do find it a bit annoying that you guys are still using a junk reference 290X instead of a properly cooled 390X.
TheinsanegamerN - Tuesday, May 17, 2016 - link
That's what AMD provided. A custom-cooled nvidia 980 Ti will perform better than the stock model, yet people don't complain about that.
When Anand DID use a third-party card (460s, IIRC) there was a massive backlash from the community saying they were 'unfair' in their reviews. So now they just use stock cards. Blame AMD for dropping the ball on that one.
Ryan Smith - Tuesday, May 17, 2016 - link
It's an all-flagship lineup. The 290X was AMD's flagship from 2013 to 2015. The 390X was never their flagship.
Eden-K121D - Wednesday, May 18, 2016 - link
How come?
just4U - Wednesday, May 18, 2016 - link
Fury came along...
Guspaz - Tuesday, May 17, 2016 - link
I'm hoping the full article compares it to cards outside the same class, like the 970. It's hard to judge upgrade effectiveness if cards are only ever compared against other top-end cards.
Beararam - Tuesday, May 17, 2016 - link
"I also wanted to quickly throw in a 1080p chart, both for the interest of comparing the GTX 1080 to the first-generation 28nm cards, and for gamers who are playing on high refresh rate 1080p monitors."
THANK YOU. High refresh seems to get swept under the carpet in favor of 4K displays. The old 780 I have is trying its best to remain relevant, but can't seem to make a 144Hz panel worth its while.
brucek2 - Tuesday, May 17, 2016 - link
+1
Lonyo - Tuesday, May 17, 2016 - link
Finally time to consider upgrading, once the AMD cards are out to see which is the best value and prices have stabilised.
wishgranter - Tuesday, May 17, 2016 - link
Hi Ryan, please test it HARD on the compute side, as gaming benchmarks are everywhere now.
For us, using it for compute is more interesting... :D
dragonsqrrl - Tuesday, May 17, 2016 - link
Yep, haven't seen any compute results yet in other reviews. Likely unparalleled single and half precision performance, and mediocre double precision.
lefty2 - Tuesday, May 17, 2016 - link
Why did they compare DX11 and DX12 results in Hitman, but not Ashes of the Singularity? Anyone else find that odd?
Ryan Smith - Tuesday, May 17, 2016 - link
Ashes is a game that I only intend to run in DX12. For all intents and purposes it's the marquee DX12 title, and I expect hardware vendors to be able to handle it well, especially as its engine was more or less designed for low-level APIs from the start.
Hitman, on the other hand, had its DX12 implementation essentially bolted on after the fact.
Achaios - Tuesday, May 17, 2016 - link
There's no reason for anyone playing at 1920x1080 to buy this card, and it's still not quite good enough at 4K either, meaning it falls short of the 60 FPS mark there.
Will wait for the 1080 Ti.
Dritman - Tuesday, May 17, 2016 - link
More half-assed content from Anandtech. I'm not even surprised anymore. Can't wait to hear more excuses from Ian; that's what the audience really wants, right Ian? I keep coming back hoping you guys will get your shit together, but I think I'm ready to say goodbye.
Every single other outlet on the net with a 1080 review has achieved more than Anandtech. How do you think that reflects on you?
silverblue - Tuesday, May 17, 2016 - link
Judge them once the review is out.
Ryan Smith - Tuesday, May 17, 2016 - link
I'm always sorry to lose a reader.
But I also don't make any apologies for how I've chosen to publish this. I had 4 days to work on this, and that's not sufficient time for a full AnandTech-quality review.
vladx - Tuesday, May 17, 2016 - link
Don't sweat it Ryan. I want an in-depth look into the Pascal architecture, and I really want to see how Pascal's IPC compares to Maxwell's; my bet is it's about 10-15% lower overall.
vladx - Tuesday, May 17, 2016 - link
Bye. Anandtech is better without the likes of you, with comments like that.
brucek2 - Tuesday, May 17, 2016 - link
If AnandTech was not included among certain sites hand-picked to receive early review samples, that may actually reflect quite well on their editorial integrity.
Also, really not feeling the time urgency you seem to. It's not yet even possible to order the card, and a lot of related information that some would consider important -- i.e. 3rd-party cards and their performance -- isn't anywhere close to being on the table either.
Ryan Smith - Wednesday, May 18, 2016 - link
"If AnandTech was not included among certain sites hand picked to receive early review samples, that may actually reflect quite well on their editorial integrity."To be clear, we received our sample at the same time as everyone else. The issue was that I had another (previously scheduled) function to attend when those samples were distributed. No malice or anyone's part, just bad timing all around.
Michael Bay - Wednesday, May 18, 2016 - link
You're literally attention-whoring. Nobody will miss you.
Beararam - Tuesday, May 17, 2016 - link
Also, the chart shows the 780 having a 256-bit bus width. It's definitely 384-bit.
FMinus - Tuesday, May 17, 2016 - link
Can we get a table comparing an OCed GTX 980 Ti to a stock and OCed GTX 1080 in the final review?
strafejumper - Tuesday, May 17, 2016 - link
One more request: later on, update with the new game Overwatch; it comes out May 24.
yhselp - Tuesday, May 17, 2016 - link
", and in time-honored fashion NVIDIA is starting at the high-end."Come on, at least acknowledge the fact NVIDIA are actually releasing a video card based on medium-sized GPU - the GTX 1080 - and marketing it as a flagship with a price to match, even more.
No need to comment on the fact; no need to criticize NVIDIA for seeking huge margins in the consumer sector ever since Kepler, delaying the real high-end GPU, or any such thing. Just let your readers know the GTX 1080 is based on a mid-sized GPU which is not the GP100 flagship to come from the get go.
A true time-honored fashion for NVIDIA would be releasing a new architecture with the biggest GPU and charging $500 from day one. Something that last happened with Fermi.
nevcairiel - Tuesday, May 17, 2016 - link
It beats every other GPU on the market; if that's not high-end... High-end is a moving target; it's whatever is fastest at the time of writing.
Certainly, it could be faster - it always can be. But GP100 just doesn't have the availability yet. They could wait longer and then launch your true "high-end" first, but instead we get new toys sooner, which is always a good thing.
yhselp - Wednesday, May 18, 2016 - link
On the contrary, nowadays we get more performance late, and we pay double for it. We used to get the large-sized GPU first with a new architecture - just like the GTX 480 - but ever since Kepler, we've had to wait a year after release. In the meantime, NVIDIA have been charging flagship money for the medium-sized GPU - like the GTX 460 - and releasing the vulgar, super-high-margin Titan somewhere in-between. Essentially, by the time the 780 Ti, the 980 Ti, the Titans, and even the very cut-down 780 came out, they were already outdated products as far as technology goes, but still carried a premium price tag. Why is that so hard to understand?
As far as performance goes - of course a new architecture on a new node will be significantly faster; there's nothing amazing about that. That doesn't mean a video card based on a mid-sized GPU should be marketed as a flagship, as the best thing since sliced bread, and carry such a gruesome price premium. $700 for "irresponsible performance" - give me a break! The only irresponsible thing is blind consumers eating this up. That's why we need competition.
Keep making excuses for big companies, and see how they keep increasing pricing, delaying products, cutting features, and doing whatever the hell they want. Guess who gets screwed as a result of this - that would be you, and me, and every other consumer out there. So keep at it.
yhselp - Wednesday, May 18, 2016 - link
Just to clarify a bit more: going into Kepler, NVIDIA were quite nervous about how consumers would react to all this, and although journalists, including Anandtech, noted that the GTX 680 was not a direct successor to the GTX 580, but rather the new GTX 560 Ti, and as such was essentially twice as expensive, it didn't seem to bother consumers, perhaps because, as you say, it's so new and fast. Whether it's really because consumers are misinformed, don't care, or a combination of both is irrelevant - it's now history. NVIDIA managed to get away with it, and it has been that way ever since. And now, with Pascal, they're looking to expand on it all and charge even more - up to $150 extra, as noted at the end of this article. They might be looking to establish a price premium for overclocking capabilities as well: a sort of Intel K-series, but on top of a product that is already very expensive.
The Titan-class cards are just the other side of this story. After a successful GTX 680 launch, NVIDIA decided to try and do the same with the large-sized Kepler GPU. On top of delaying the flagship product - the GK110 - they decided to, again, charge essentially double. And thus the original Titan was born. They were so nervous about it that they decided to enable serious compute performance on it, so that if it failed in the consumer sector it'd sell in the compute world. It exceeded their wildest dreams - apparently, people were not only willing to throw money at them, but didn't know any better either. And so the writing was put on the wall, and we've been reaping the "benefits" ever since. It looks like we'll do the same again.
lashek37 - Tuesday, May 17, 2016 - link
I'm selling my 980 Ti and buying this beast. Anybody on board? 😂😉
Lolimaster - Tuesday, May 17, 2016 - link
If you've got a Ti, there's no reason to "upgrade" to this card. Wait for Vega or the 1080 Ti.
Iamthebst87 - Tuesday, May 17, 2016 - link
If you do, I'd wait till AIB cards become available. The reference 1080 OCs like crap compared to Maxwell: the reference 980 Ti got about a 20-25% performance gain from overclocking, while the 1080 gets about 10-12%. If you have an AIB 980 Ti you might even be getting more from the OC. So to sum up, an OCed AIB 980 Ti is only about 15-20% slower than an OCed 1080.
lashek37 - Wednesday, May 18, 2016 - link
I have an EVGA 980 Ti from Amazon.
wumpus - Tuesday, May 17, 2016 - link
Looks like I get to eat my words about posting "doom and gloom" about a Friday 6pm press event. They didn't have any real "bad news" (although the reason for refusing to demonstrate 'ray traced sound' was clearly a lie; you can simply play the sounds of being in various places to an audience as easily in a movie as in VR). I wouldn't call it terribly great news either, just the slow and steady progression of a company without competition.
Looks like it competes well enough against the existing base of nvidia cards. It also appears that they don't feel a need to bother worrying about "competition" from AMD. :( (Note that Intel appears to spend at least as many mm and/or transistors on GPU space as this beast. What they don't spend is power (watts) and bandwidth. The difference is obvious, and I can't see them trying to increase either on their CPUs.)
One thing that keeps popping up in these reviews is the 250W power limit. This just screams for someone to take a (non-Founders Edition) reference card and slap a closed water-cooling system on it. The results might not be as extreme as the 390, but it should be up there. I suspect the same is true (and possibly more so, unless deliberately crippled) of the 1070.
rhysiam - Tuesday, May 17, 2016 - link
"Note that Intel appears to spend at least as many mm and/or transistors on GPU space as this beast"I don't think that's accurate at all. To my knowledge Intel haven't released specific die size or Transistor counts since Haswell. But the entire CPU package of a 4770K is ~1.4B transistors (~one fifth of a GP204 GPU). Anandtech estimated ~33% of the die area (roughly 500M transistors) was dedicated to the 20EU GT2 GPU. Obviously the GT2 is hardly Intel's biggest graphics package, but even a larger one like the 48EU GT3e package from the Broadwell i7-5775C must surely still have significantly fewer transistors than a GP204.
rhysiam - Tuesday, May 17, 2016 - link
I mean GP104, of course.
bill44 - Tuesday, May 17, 2016 - link
When you do the full review, could you spare a thought for some of us who are not into gaming?
I would like to know about the audio side (sample rates supported, etc.) as an example, and a proper full test for using it with madVR (yes, we know it supports the usual frame rates, etc.). Some insights into 10/12-bit support on Windows 10 (not just for games & madVR DX11 FSE), including generic programs like Photoshop/Premiere, etc., would also be welcome.
On a side note: if you're not into gaming, but prefer a 4K@60p dual-screen setup with 10-bit colour, which GPU is best?
bill44 - Tuesday, May 17, 2016 - link
Forgot to add: Tom's Hardware does not mention any of this.
http://www.tomshardware.co.uk/nvidia-geforce-gtx-1...
vladx - Tuesday, May 17, 2016 - link
Why would you want a beast like the GTX 1080 for work in Photoshop and the rest of Adobe's suite? It'd just be a big waste of money.
bill44 - Tuesday, May 17, 2016 - link
Architectural changes.
By the end of the year, there will be some 4K HDR monitors, maybe even 120p ones. If I want to edit in Premiere with dual 4K HDR 120p screens, or if I prefer a 5K screen over a single-cable connection, what are my GPU choices? DP 1.3?
I also mentioned 10-bit support (not Quadro) and madVR. It's not this card (specifically) I'm interested in, but the architecture. There will be cheaper cards in the future for sure; however, they will use the same tech as here. Hence my curiosity.
dragonsqrrl - Tuesday, May 17, 2016 - link
The performance can be very useful in Premiere and After Effects for both viewport rendering and export.
Ryan Smith - Wednesday, May 18, 2016 - link
"Some insights into 10/12bit support on Windows 10 (not just for games & madVR DX11 FSE) inc. generic programs like Photoshop/Premiere etc."
You're still going to want a Quadro for pro work. NVIDIA is going to allow 10bpc support in full-screen OpenGL applications, but not windowed applications.
bill44 - Wednesday, May 18, 2016 - link
That's a bummer.
Currently, I have 3 screens connected: 2 desktop monitors and 1 for the HTPC through the amp.
If I want full hardware HEVC 10-bit decoding and DP 1.3/1.4 for 2x 4K or a 5K HDR monitor over one cable, I need to give up 10bpc support for windowed apps. Or go with something like the Quadro M2000, with none of the latest goodies (DP 1.2 only, no HDMI 2.0b, no full hardware HEVC 10-bit decode, no HDR, etc. etc.).
It will be quite a while before any new Quadro supports them, regardless of price.
Ryan Smith - Wednesday, May 18, 2016 - link
To be clear, you get 10bpc support for windowed D3D applications, so your HTPC idea will work.
The distinction is for professional applications such as Photoshop. NVIDIA has artificially restricted 10bpc color support for those applications in order to make it a Quadro feature, and that doesn't change for the GTX 1080.
sagman12 - Tuesday, May 17, 2016 - link
"Gamers however won’t be able to get their hands on the card until the 27th – next Friday – with pre-order sales starting this Friday." I hope this is true. I don't want to have to stay up all day hitting F5 until i secure my 1080FER3MF - Tuesday, May 17, 2016 - link
Looking at an unopened Gigabyte R9 390X G1 that I picked up for £250 (standard price for an R9 390X is £330-£360 in UK money).
This is getting 50-66 percent of the framerate of the GTX 1080, but for slightly less than half the price ($599 for the non-Founders edition translates to roughly £500 inc. VAT).
Knowing what we know now about the likely performance of upcoming 14/16nm products, should I be sending it back?
cheshirster - Tuesday, May 17, 2016 - link
Perf/$ would not change drastically, but perf/watt will skyrocket. You'll probably be able to get the same perf at half the power.
R3MF - Tuesday, May 17, 2016 - link
Cheers.
Marucins - Tuesday, May 17, 2016 - link
Where are the compute tests?
3ogdy - Tuesday, May 17, 2016 - link
Ryan, please consider integrating this into your upcoming review of the 1080; it would be extremely useful:
Clock the 1080 just like the 980 and then compare their performance. I would like to see how much of that 15-FPS-on-average increase vs the 980 Ti comes from the clock speed increase, and how much of an impact Pascal actually has. As it looks right now, the 1080 is a disappointment - I was expecting something truly stellar from nVidia after touting this and that and making serious all-around changes, taking advantage of a process node half as big as the previous one... So far the 1080 is shaping up to be just an incremental upgrade, if not even a sidegrade, once clock speed differences are negated. I hope I'm as wrong as one could be, though. Good preview so far!
tarqsharq - Tuesday, May 17, 2016 - link
Yes, this would be very interesting!
genekellyjr - Tuesday, May 17, 2016 - link
Doing some quick calcs w/ BF4 FPS numbers gives: 1080 - 111 FPS/MHz/core, 980 Ti - 150 FPS/MHz/core, 980 - 75 FPS/MHz/core for 4K. The 1440p and 1080p numbers also follow suit (150/206/102 for 1440p, 231/319/159 for 1080p).
Essentially, this admittedly crude number crunching does show that, normalized, the 980 Ti is better per MHz per core, at least for BF4. I used the boost clock numbers for the MHz. I hope it gets investigated, because it seems like Nvidia spent the extra transistor budget on other aspects (maybe FP16 compute?) to the detriment of its FP32 gaming chops.
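For anyone who wants to redo that normalization against the review's actual results, here's a minimal sketch. The FPS values below are placeholders (not the review's numbers), and the 1e7 scale factor is just an assumption to make the ratios readable:

```python
# Minimal sketch of the FPS-per-MHz-per-core normalization above.
# The FPS values are placeholders -- substitute the actual BF4 results.
cards = {
    # name: (fps_at_4k, boost_mhz, cuda_cores) -- clocks/cores are published specs
    "GTX 1080":   (50.0, 1733, 2560),
    "GTX 980 Ti": (45.0, 1075, 2816),
    "GTX 980":    (30.0, 1216, 2048),
}

for name, (fps, mhz, cores) in cards.items():
    norm = fps / (mhz * cores) * 1e7   # scale so the numbers are readable
    print(f"{name}: {norm:.0f} (FPS per MHz per core, x1e7)")
```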
Ryan Smith - Tuesday, May 17, 2016 - link
I can already tell you right now that at an architectural level, per-clock per-core performance between GP104 and GM204 is virtually identical. Throughput of the ALUs, texture units, ROPs, etc. has not changed. What makes GP104 faster than GM204 is the larger number of SMs, the higher clockspeeds, and a memory subsystem fast enough to feed the former.
(Which is not to discount the Pascal architecture. 16nm FinFET alone won't let you ramp up the clockspeeds like this. NVIDIA had to specifically engineer the architecture to hit those clockspeeds without driving up the power consumption.)
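Ryan's point is easy to sanity-check against paper specs: if per-clock, per-core throughput is unchanged, the expected speedup is roughly the ratio of cores times clock. A quick sketch using the published boost clocks:

```python
# If per-clock, per-core throughput is unchanged, the paper GP104-over-GM204
# speedup is roughly the (cores x boost clock) ratio, bandwidth permitting.
cores_1080, boost_1080 = 2560, 1733   # GTX 1080 published specs (MHz)
cores_980,  boost_980  = 2048, 1216   # GTX 980 published specs (MHz)

ratio = (cores_1080 * boost_1080) / (cores_980 * boost_980)
print(f"paper speedup: {ratio:.2f}x")   # ~1.78x, in line with the 50-80% seen in games
```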
modeless - Tuesday, May 17, 2016 - link
There's only one thing I want to know about this card: does it support the special instructions that the Tesla P100 has for half-precision floats (FP16), which double throughput? This is very important for deep learning, and nobody has confirmed it yet.
vladx - Tuesday, May 17, 2016 - link
It most surely doesn't support those like the P100 does, as that is the whole point of selling Tesla at such a high price.
dragonsqrrl - Tuesday, May 17, 2016 - link
So has that been confirmed by Nvidia or a review, or is that your assumption?
vladx - Tuesday, May 17, 2016 - link
I'm 100% sure, since the GTX 1080, which is based on GP104, has a different architecture from the P100 - one almost identical to Maxwell.
Yojimbo - Wednesday, May 18, 2016 - link
I'm 100% sure you're wrong, because the GP106, or something like it, will be used in the Drive PX 2 and will have double-throughput half-precision support, since it's going to be used as a machine learning inference engine. If the GTX 1080 doesn't support double-throughput half precision, it's probably because they purposefully disabled it to prevent the cards from being used in high quantities for compute workloads. They will probably, at some point, come out with a Tesla based on the GP104 and/or the GP106 that does support double-throughput half-precision compute, to replace the M40 and M4 cards. Pascal does everything better than Maxwell, so it would be starving a growing industry to leave that segment to Maxwell for too long.
vladx - Wednesday, May 18, 2016 - link
Huh? We know next to nothing about the GP106 architecture, unlike the GP100 and GP104 chips, which, like I said, have different architectures, with GP104 (GTX 1080) being almost identical to Maxwell on a hardware level.
vladx - Wednesday, May 18, 2016 - link
Anyways, see Ryan Smith's answer below. I was 100% correct.
dragonsqrrl - Wednesday, May 18, 2016 - link
Not necessarily. In fact I would be very surprised if GP104 doesn't support the double FP16 throughput. Like yojimbo said, the more likely scenario is that half-precision performance is capped in some way on GeForce cards (likely through firmware).
vladx - Thursday, May 19, 2016 - link
When we're talking about an architecture, we're speaking about an instruction set. Since GP104 lacks certain compute instructions compared to the GP100 in the P100, we can accurately say they have different architectures. Yes, they are both Pascal and they have the same features on a software level, but hardware-wise they are different. It doesn't matter how Nvidia enforces those differences; what matters is that they're different at an architecture level (instruction set).
Yojimbo - Thursday, May 19, 2016 - link
Why is everything 100% with you? Neither of us knows anything about this issue with 100% certainty. And the fact that half precision at double throughput is not possible on the GTX 1080 does not mean that it's not possible on GP104.
Further explanation of what you said "huh?" to: NVIDIA revealed the Drive PX 2 at both CES 2016 and GTC 2016. It has two Pascal-based Tegra chips and two larger Pascal GPUs. The main purpose of the Drive PX 2 will be to run inference algorithms for self-driving cars. There are large portions of these algorithms which only require FP16 precision. NVIDIA would be leaving performance on the table if they didn't include the FP16 throughput enhancements in whatever chips they are using for the Drive PX 2. And those GPUs are definitely not GP100s. Unless they specially designed another GPU that is based on the GP100, but much smaller, they are probably using something along the lines of a GP106 or GP107 for that purpose.
I'm guessing it's easier to design 6 GPUs and put FP16 enhancements in all of them than it is to design 8 GPUs and put FP16 enhancements in 4 of them. I don't think you have any reason to believe it's so difficult for them to put the FP16 enhancements into GP104. (They had already done so for the Maxwell-based Tegra X1, by the way.) You just seem to want to believe things which fit your preferred narrative of "the GTX 1080 is almost identical to Maxwell".
dragonsqrrl - Wednesday, May 18, 2016 - link
@vladx: They're all based on the same underlying architecture (Pascal). I'm actually not sure why you think GP104 is closer to Maxwell architecturally than GP100. Are you referring to the SMM layout?
Ryan Smith - Wednesday, May 18, 2016 - link
"Does it support the special instructions that the Tesla P100 has for half precision float (FP16), which double throughput?"The answer is basically no. More info to come in the full review.
modeless - Thursday, May 19, 2016 - link
:( Thanks. Hope NVIDIA gets some competition in deep learning soon...
Yojimbo - Thursday, May 19, 2016 - link
They have competition already with Xeon Phi and CPUs. The trouble with AMD's GPUs for deep learning is that they don't have nearly the same level of library support as NVIDIA's GPUs do. Intel is also hoping to adapt FPGAs for deep learning purposes, I think, but I doubt that's going to help you out much.
damianrobertjones - Tuesday, May 17, 2016 - link
Each new gen sees around an extra 10-14 fps added to the top card over the previous gen. No. No thank you. These companies keep DRIP-FEEDING us small advances and, obviously, this is business.
Spend your cash, fine, but they're laughing at us each time. (I have an eBay 980.)
FMinus - Tuesday, May 17, 2016 - link
Though the move was from Maxwell to Pascal, it looks more like Paxwell - Maxwell on steroids: 70% clock, 30% compression, not much innovation. And that PCB is a disgrace, skimping on the 6th phase, with only one MOSFET per VRM phase. Weren't they talking about premium components to justify the added premium? It certainly doesn't look premium.
leoneo.x64 - Tuesday, May 17, 2016 - link
Ryan, please excuse me for asking - I am not being rude - but where is part 2 of the Galaxy S7 edge review?
Lolimaster - Tuesday, May 17, 2016 - link
A fail gen for nvidia.
They need 1.7GHz to actually show improvement vs the 1-1.2GHz of the previous AMD/Nvidia GPUs. Imagine the GP104 at 1.2GHz.
Where's the efficiency?
Polaris 10 is aiming at the same 1GHz sweet spot, improving the hell out of its GPU cores.
nevcairiel - Tuesday, May 17, 2016 - link
Perf/watt is higher than in any previous generation (i.e. efficiency), so why does it matter how it gets there?
Eden-K121D - Wednesday, May 18, 2016 - link
Is the architecture in decline, more specifically IPC?
BoAdk - Tuesday, May 17, 2016 - link
The GTX 780 has a 384-bit memory bus, not 256-bit :)
Ryan Smith - Tuesday, May 17, 2016 - link
Thanks.
chrone - Tuesday, May 17, 2016 - link
@Ryan Smith and @Anandtech, thanks for including The Witcher 3 benchmark! :D
Ryan Smith - Tuesday, May 17, 2016 - link
Credit goes to Dan on this one. He took the time to go through it and find a performance-intensive section we could reliably benchmark, which is always a challenge with RPGs.
chrone - Wednesday, May 18, 2016 - link
Awesome! Thanks @Dan. :)
none12345 - Tuesday, May 17, 2016 - link
So it's 25-30% faster than a 980 Ti - a few of those results were less, a few were more. Let's just call it 30%. And it costs about $100 more than a 980 Ti, so call that 15%.
So, roughly speaking, call it 15% better value over a 980 Ti (see the sketch below). That's not very impressive for 28nm -> 16nm.
The 1070 will be more interesting to see numbers on. As will AMD's response.
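A quick sketch of that value math, using the rough figures from the comment above; the ~$650 980 Ti street price is an assumption:

```python
# Rough perf-per-dollar comparison from the figures above.
perf_980ti, price_980ti = 1.00, 650.0   # 980 Ti as the baseline, ~$650 street
perf_1080,  price_1080  = 1.30, 750.0   # "call it 30%" faster, ~$100 more

value_gain = (perf_1080 / price_1080) / (perf_980ti / price_980ti)
print(f"perf-per-dollar improvement: {(value_gain - 1) * 100:.0f}%")
# ~13%, close to the ~15% eyeballed above
```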
vladx - Tuesday, May 17, 2016 - link
More like 30-40% in most games; only in a few does it go below 30%. Pretty good, and a bit better than early expectations.
FMinus - Thursday, May 19, 2016 - link
Who runs a stock 980 Ti?
vladx - Thursday, May 19, 2016 - link
Just as there are overclocked 980 Tis, so there will be aftermarket 1080 OCs. That's why you compare reference to reference and OC to OC.
vacavalier - Monday, May 23, 2016 - link
I'm keeping my 980 Ti(s) for the moment, waiting to see what the Red team "delivers" this round and what the 1080 Ti (if released) promises.
Luke212 - Tuesday, May 17, 2016 - link
Does the 1080P do twice the FP16 performance like the Tesla P100? Please test SGEMM at FP16 and FP32.
Luke212 - Tuesday, May 17, 2016 - link
I mean the GTX 1080, not 1080P.
Ryan Smith - Tuesday, May 17, 2016 - link
Unfortunately I don't have a suitable SGEMM binary. But if you know where I can get something, I'd be happy to take a look.
Acarney - Tuesday, May 17, 2016 - link
I'm really curious what the 1070's TDP will be like, and if it'll still do 70+ fps with maxed quality settings at 1080p... Might be the perfect sweet spot for an HTPC that you also want to basically act as a console game system :)
dragonsqrrl - Wednesday, May 18, 2016 - link
150W, according to leaked specs.
shabby - Tuesday, May 17, 2016 - link
What happened to the 67C temps at 2.1GHz? Seems to be running a bit hotter at stock clocks...
Ryan Smith - Tuesday, May 17, 2016 - link
My assumption has been that that was with the fan at 100%, in order to achieve maximum cooling and assure the maximum overclock.
shabby - Tuesday, May 17, 2016 - link
I hope you'll try that in the full review? :)
HollyDOL - Wednesday, May 18, 2016 - link
So, to summarize... it's more powerful than I expected and less powerful than I hoped for.
HollyDOL - Wednesday, May 18, 2016 - link
Oh, btw Ryan, thanks for the early data; it helped me a lot.
chrisdent - Wednesday, May 18, 2016 - link
Benchmark comparisons?
When putting a new card up against the older cards, are the numbers for the older cards from the latest drivers available, or year-old numbers that may have changed with driver updates?
nevcairiel - Wednesday, May 18, 2016 - link
That was answered elsewhere in the comments; it's all on the newest drivers.
WereCatf - Wednesday, May 18, 2016 - link
I am much more interested in knowing if Pascal brings any improvements to NVENC, something that I have yet to see discussed anywhere.
dragonsqrrl - Wednesday, May 18, 2016 - link
Yes, quite a few:
http://www.tomshardware.com/reviews/nvidia-geforce...
It now supports full fixed-function HEVC 10/12-bit encode/decode, up to 2x 4K 60Hz, 1x 4K 120Hz, or 1x 8K 30Hz, at up to 320 Mbps.
QuinQuix - Wednesday, May 18, 2016 - link
I think the author may have missed that what matters in node shrinks is the relative node size, not the absolute difference of the shrink.
Meaning it's not surprising that 28nm to 16nm is a larger performance increase than going from 58nm to 40nm; comparatively speaking, the transistors shrunk much more in the former.
Yojimbo - Wednesday, May 18, 2016 - link
That's true, except these days you can't go purely by those numbers.
FMinus - Thursday, May 19, 2016 - link
Except it still pretty much is TSMC's 20nm process with the fancy 16nm FinFET name tagged onto it.
Tuvok86 - Wednesday, May 18, 2016 - link
Finally a card (this or the 1070) worth retiring my 7970 GHz for.
dustwalker13 - Wednesday, May 18, 2016 - link
Kudos as always for being thorough: no overclocked cards, no GameWorks, and a balanced set of games with an equal share favoring either nvidia or amd. That makes a balanced review - a rarity these days, and highly appreciated.
stardude82 - Wednesday, May 18, 2016 - link
You know, I'm still waiting for that complete GTX 950 review.
Sivar - Wednesday, May 18, 2016 - link
Is it fair to publish relative power consumption in Crysis 3 of the 1080 vs other cards when the 1080 is pushing twice the frame rate of some of those cards?
Seems like a better comparison would be to lock the framerate such that the lowest-end card in the list can keep up, enable vsync, and test power consumption when the cards are doing the same amount of real work.
paulemannsen - Thursday, May 19, 2016 - link
+1, do both.
nick.evanson - Monday, May 23, 2016 - link
Would it not still show the same relative differences though? The net power consumption would obviously be different, but the relative differences would simply reflect process size, transistor count, clock, etc.; i.e. nothing that would be particularly surprising. Nor useful, I should imagine, even to the user for whom power consumption does matter: would such a user discount a product as a potential purchase because it does not fit within a required power window, or would they examine how best to deal with the additional power requirements and heat generation? Personally I only look at the power figures to gauge how hot my office is going to get :)
oobga - Wednesday, May 18, 2016 - link
Glad all the details of the Founders Edition are out. Almost bought one. Fortunately I was able to cancel my pre-order after finding out there is nothing special about it aside from the name.
Fingers crossed a good closed-loop water-cooled 1080 comes out soon!
Marucins - Thursday, May 19, 2016 - link
Why are there no compute tests?
Anandtech always does compute tests, and now, with the presentation of a new architecture, all of a sudden they disappear.
Why?
HollyDOL - Thursday, May 19, 2016 - link
Maybe because it is a _pre_view?
Marucins - Monday, May 23, 2016 - link
I hope so... I trust the final test will be complete.
HollyDOL - Wednesday, May 25, 2016 - link
My 2 cents: my bet would be that it comes out together with the 1070 in one big review.
Mlok - Thursday, May 19, 2016 - link
I've seen this already in tables in some other articles, so I've got to ask: since when is memory clock measured in Gbps? It doesn't make any sense to me; one would say clock is measured in Hz, while gigabits per second goes with bandwidth, hm?
nick.evanson - Monday, May 23, 2016 - link
The memory clock isn't used for determining processing capability as such, unlike with the GPU core itself, where it is used to determine peak FLOPS, pixel read/writes, etc. In the case of memory, all that matters (on face value) is how much data can be transferred to and from it, and this is indicated by the Gbps: the memory can shift, say, 7 Gbits per second per pin, and because the bus is 256 bits wide, the total bandwidth is 1792 Gbit/s (or 224 GB/s). So one might ask, why not just quote the bandwidth? This used to be the case, simply because bus widths across different vendors and SKUs were remarkably similar compared to the broad variation one sees now.
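nick's arithmetic, written out as a minimal sketch using the example figures from the comment above:

```python
# Memory bandwidth = per-pin data rate x bus width, using the figures above.
data_rate_gbps = 7       # Gbit/s per pin (e.g. 7 Gbps GDDR5)
bus_width_bits = 256     # memory bus width in bits

bandwidth_gbit = data_rate_gbps * bus_width_bits   # 1792 Gbit/s
bandwidth_gbyte = bandwidth_gbit / 8               # 224 GB/s

print(f"{bandwidth_gbit} Gbit/s = {bandwidth_gbyte:.0f} GB/s")
```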
mikable - Thursday, May 19, 2016 - link
When's the MXM version coming out?
lashek37 - Friday, May 20, 2016 - link
As soon as AMD comes out with their card, Nvidia will unleash the GTX 1080 Ti. 😂 Nvidia does this every year, lol.
Ranger1065 - Monday, May 23, 2016 - link
Anandtech is a good site that I have visited frequently since the early 2000s to quench an insatiable appetite for primarily CPU & GPU reviews. However, there are now sites out there, one in particular, that are doing it better than Anandtech, and consequently I don't spend so much time here.
It is now almost a week since the embargo on GTX 1080 reviews was lifted and, previews aside, there is still a deafening silence from Anandtech. Yes, the apologists will argue Anandtech does a deeper review, give them time and all that, but seriously, when your review is this late it begins to look like incompetence. Or perhaps you consider your reviews to be elitist, the holy grail among tech websites, and that therefore any delay is acceptable? What pressing projects are the GPU staff working on that could explain this state of affairs?
GET IT TOGETHER ANANDTECH YOU USED TO BE BETTER!
vacavalier - Monday, May 23, 2016 - link
If you own a 980 Ti... no need to get overly excited/worried about upgrading to this particular GPU, unless you just have to have the "latest and greatest". Initial testing was done vs. the GTX 980? Rather odd, and it goes back to my initial statement.
On the other hand, if running a 970 (or below), a 980 (apparently, according to NV), or a 700-series GPU, this presents a very good upgrade path.
Time will tell what will happen with that MSRP, however, once vendors start doing their add-ons: cooling solutions, factory OCs, software bundles, etc...
lakedude - Tuesday, May 24, 2016 - link
I'm shocked every day that goes by that no GTX 1080 review exists yet. The move to 14/16nm has got to be one of the most highly anticipated in recent history, and all we have is a preview?? BTW, I don't need a tech site to price-check the 6700 CPU for me; I can do that myself.
Bang4BuckPC Gamer - Wednesday, May 25, 2016 - link
I thought the main purpose of a review was to give a potential customer an idea of what to expect, some expert analysis, and some honest judgement on whether or not a product is worth your money as a consumer. The card releases in two days, and if you don't have a review out by then, what is the bleeding point?
vzortex - Wednesday, May 25, 2016 - link
"Later this week" was over last week. No review?
tarqsharq - Wednesday, May 25, 2016 - link
I'm kind of expecting it tomorrow, the day before the product gets into people's hands. I know pre-orders already started, but those people were probably going to buy regardless.
HOOfan 1 - Thursday, May 26, 2016 - link
Not up this morning; don't they usually post content overnight? They were well behind the pack in publishing a Fury X review as well. With a review this late, I hope they do some extensive overclocking and compare it to SLI and CrossFire. Heck, if they don't get the review up by tomorrow, maybe they can benchmark the 1080 in SLI.
tarqsharq - Thursday, May 26, 2016 - link
Yeah, this is starting to get a little worrisome. Ryan mentioned getting it out sometime last week, and it's getting close to a whole extra week on top of that now...
Maybe he found something interesting to test and wants to confirm before publishing? We won't know until it's posted obviously, but at least I'm not chomping at the bit till I see what AMD is offering this year.
justaviking - Friday, May 27, 2016 - link
10 days later...
Today is the official "release" day...
Nothing new?
How about posting a 95%-done analysis, and letting us know what you're still working on? That would be a lot better than deafening silence.
Anato - Friday, May 27, 2016 - link
Where could I read an analysis of the new Nvidia GeForce GTX 1080?
I'm asking because I haven't followed other sites for a long time, but I'm now so fed up with this. Broken promises of a review and then nothing but silence? I will still come to Anandtech first, but I'm not going to wait 10 days for an important review!
@Ranger1065: Would you please elaborate on your previous comment?
HOOfan 1 - Friday, May 27, 2016 - link
HardOCP, Tom's Hardware, TechPowerUp, Guru3D, and plenty of other sites have posted reviews.
wira123 - Sunday, May 29, 2016 - link
Anandtech is such a f***ing joke nowadays; most tech reviewers already published their GTX 1080 & 1070 reviews today, and yet Anand is still stuck on the 1080 PREVIEW. Hahaha, what a joke......
pencea - Sunday, May 29, 2016 - link
Anandtech sure is slow. Other major sites have already begun posting reviews of the GTX 1070, while this site hasn't even posted a review of the GTX 1080, which came out days ago...
Always late to the party.
HollyDOL - Wednesday, June 1, 2016 - link
While I have been advocating for AT, this hypothetical 1080 review could just go in the dust bin now. All over the web there are lots of detailed reviews of the 1080, and now also of the 1070. In the meantime, since this preview, Ryan has posted two more articles. Unless his GTX 1080 review sample malfunctioned, there is hardly any excuse for such a huge delay.
masters_league - Tuesday, May 31, 2016 - link
Maybe he's waiting for a more recent driver to get the most performance out of the card.
catavalon21 - Friday, June 3, 2016 - link
May 17: "While I’ll get into architecture in much greater detail in the full article..."June 3: Still waiting for a full article
For a site that has appropriately criticized vendors for paper launches of hardware, it's starting to appear that the same is happening with articles.
stardude82 - Sunday, June 5, 2016 - link
I'm still waiting for a GTX 950 review.
TacoR0sado - Monday, June 6, 2016 - link
I guess the full article isn't coming.
hafizmajid - Monday, June 6, 2016 - link
Still no review. One of the things I like about your reviews is that they bring up insight that is not available elsewhere. Do you have a timescale?
HOOfan 1 - Monday, June 6, 2016 - link
Their delay seems to be inciting discontent.
jwinter - Monday, June 6, 2016 - link
Cool.
lakedude - Wednesday, June 8, 2016 - link
Speculation: for some reason they don't have a card to test.
Locut0s - Friday, June 10, 2016 - link
Starting to wonder if perhaps they found some crucial problems and have been spending this time trying to run them to ground. But IMHO that doesn't make the situation much better, as in that case they should have updated us at some point with said fact.
nsavop - Friday, June 10, 2016 - link
Why bother at this point? Using the excuse that such an in-depth article takes time to write, and telling everyone it was almost finished weeks ago, is flat-out dishonest, especially considering it's been happening on a consistent basis.
They can't keep using the excuse that Anandtech's articles are very in-depth and take longer. That might have been true years ago, but not anymore, not when sites like PCPer and TechReport go just as in-depth and by some miracle of God are able to get the article out on time.
The main site and the forums have been going downhill for a while now, so don't hold your breath on the "full review".
TacoR0sado - Wednesday, June 15, 2016 - link
Yeah, I think it's safe to say at this point that the full review won't be happening.
Beararam - Wednesday, June 15, 2016 - link
How to fade into obscurity, by the artists formerly known as Anandtech:
Take over 4 weeks longer than some kid on YouTube to review the fastest GPU to date, with the first die shrink in years; don't bother at all to review the 960 or the S7 (completely); not sure what else you need to do. This is a pretty good way to sink your company.
GTX 780 release date/review date: May 23, 2013/ May 23, 2013
GTX 980 release date/review date: Sep 18, 2014/ Sep 18, 2014
GTX 1080 release date/review date: May 27, 2016/ ???
none12345 - Wednesday, June 22, 2016 - link
So, 3 weeks later and no review? A week, 2 weeks, sure, but 3-4 weeks... something seems odd!
sheppkvn - Thursday, June 23, 2016 - link
At this point I think it's safe to say there will be no review for the 1080. Most likely no 1070 review either, considering the cards have been out for a while now.