Test Setup

Processor: AMD A8-7650K (2 Modules / 4 Threads, 3.3 GHz Base, 3.7 GHz Turbo, 95W, MSRP $105)
Motherboard: GIGABYTE F2A88X-UP4
DRAM: G.Skill RipjawsZ 4x4GB DDR3-2133 9-11-10
Low End GPU: Integrated; ASUS R7 240 2GB DDR3; Dual Graphics with R7 240
Mid Range GPU: MSI R9 285 Gaming 2GB; MSI GTX 770 Lightning 2GB
High End GPU: MSI R9 290X Gaming LE 4GB; ASUS GTX 980 Strix 4GB
Power Supply: OCZ 1250W Gold
Storage Drive: Crucial MX200 1TB
Operating System: Windows 7 SP1 64-bit (Build 7601)
CPU Cooler: Cooler Master Nepton 140XL CLC

Many thanks to...

We must thank the following companies for kindly providing hardware for our test bed:

Thank you to AMD for providing us with the R9 290X 4GB GPUs.
Thank you to ASUS for providing us with GTX 980 Strix GPUs and the R7 240 DDR3 GPU.
Thank you to ASRock and ASUS for providing us with some IO testing kit.
Thank you to Cooler Master for providing us with Nepton 140XL CLCs.
Thank you to Corsair for providing us with an AX1200i PSU.
Thank you to Crucial for providing us with MX200 SSDs.
Thank you to G.Skill and Corsair for providing us with memory.
Thank you to MSI for providing us with the GTX 770 Lightning GPUs.
Thank you to OCZ for providing us with PSUs.
Thank you to Rosewill for providing us with PSUs and RK-9100 keyboards.

AMD A8-7650K Overclocking

Methodology

Our standard overclocking methodology is as follows. We select the automatic overclock options and test for stability with PovRay and OCCT to simulate high-end workloads. These stability tests aim to catch any immediate causes for memory or CPU errors.

For manual overclocks, we start at a nominal voltage and CPU multiplier based on the information gathered from previous testing, and increase the multiplier until the stability tests fail. The CPU voltage is then raised gradually until the tests pass again, and the process is repeated until the motherboard automatically reduces the multiplier (as part of its safety protocol) or the CPU temperature reaches an absurdly high level (100ºC+). Our test bed is not in a case, which should allow slightly higher overclocks thanks to a steady supply of cooler air.
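
For readers who want to see the loop at a glance, here is a minimal sketch of the manual procedure in pseudo-code form. The three callables, the default starting values and the voltage step are illustrative assumptions; in practice every change is applied by hand in the BIOS and validated with PovRay and OCCT.

```python
def manual_overclock(apply_settings, run_stability_tests, hit_safety_limit,
                     start_multiplier=37, start_vcore=1.300,
                     vcore_step=0.025, vcore_limit=1.550):
    """Sketch of the manual overclocking loop described above.

    The three callables are hypothetical stand-ins for manual steps:
    applying a multiplier/voltage pair in the BIOS, running PovRay + OCCT,
    and checking whether the board cut the multiplier or the CPU passed
    100C. Numeric defaults are illustrative, not recommendations.
    """
    multiplier, vcore = start_multiplier, start_vcore
    last_good = None
    while vcore <= vcore_limit:
        apply_settings(multiplier, vcore)
        if not run_stability_tests():
            vcore += vcore_step          # unstable: feed a little more voltage
            continue
        if hit_safety_limit():
            break                        # safety protocol or 100C+: stop here
        last_good = (multiplier, vcore)  # stable and within limits: record it
        multiplier += 1                  # then try the next multiplier step
    return last_good                     # highest known-good settings, if any
```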

Overclock Results

The A8-7650K runs at up to 3.7 GHz in its highest turbo mode, and we were able to jump straight to 4.0 GHz without much trouble. That being said, our sample did not move much beyond that: 4.1 GHz was attainable, but at 4.2 GHz we noticed that the CPU frequency would drop during sustained workloads, resulting in no overall performance increase.
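
The 4.2 GHz behaviour is straightforward to confirm by logging the reported clock during a sustained run. A minimal sketch, assuming the third-party psutil package is installed and the OS exposes the current frequency (psutil.cpu_freq() can return None on some systems):

```python
import time

import psutil  # assumption: psutil is installed and cpu_freq() works here


def watch_for_throttling(target_mhz=4200, duration_s=300, interval_s=1.0):
    """Sample the reported CPU clock while a stress test runs elsewhere and
    count how many samples fall noticeably below the expected frequency."""
    drops = 0
    samples = int(duration_s / interval_s)
    for _ in range(samples):
        freq = psutil.cpu_freq()             # namedtuple: current/min/max in MHz
        if freq is not None and freq.current < target_mhz * 0.97:
            drops += 1                       # ~3% slack for reporting jitter
        time.sleep(interval_s)
    print(f"{drops}/{samples} samples below {target_mhz} MHz")


if __name__ == "__main__":
    watch_for_throttling()
```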

177 Comments

  • Gigaplex - Tuesday, May 12, 2015 - link

    Mantle for AMD discrete GPUs runs on Intel CPUs, so it is a completely valid test for CPU gaming performance.
  • CPUGPUGURU - Tuesday, May 12, 2015 - link

    Mantle was developed as an AMD GCN API, so don't go telling us it's optimized for Intel or Nvidia, because it's NOT! Mantle is DOA, dead and buried; stop pumping a zombie API.
  • silverblue - Wednesday, May 13, 2015 - link

    You've misread Gigaplex's comment, which was stating that you can run an AMD dGPU on any CPU and still use Mantle. It wasn't about using Mantle on Intel iGPUs or NVIDIA dGPUs, because we know that functionality was never enabled.

    Mantle isn't "dead and buried"; sure, it may not appear in many more games, but considering it's at the very core of Vulkan... though that could be just splitting hairs.
  • TheJian - Friday, May 15, 2015 - link

    Incorrect. The core of Mantle's sales pitches was HLSL. You only think Mantle is Vulkan because you read Mantle/Vulkan articles on Anandtech...LOL. Read PCPer's take on it and understand how VASTLY different Vulkan (headed by Nvidia's Neil Trevett, who also came up with OpenGL ES, BTW) is from Mantle. At best AMD ends up equal here, and at worst Nvidia always has an inside track, with the president of Khronos also being the head of Nvidia's mobile team. That's pretty much like BAPCo being written by Intel software engineers and living on Intel land across the street from Intel itself...ROFL. See Van Smith's articles on BAPCo/SYSmark etc. and why tomshardware SHAMEFULLY dismissed him and removed his name from his articles ages ago.

    Anandtech seems to follow this same path of favoritism for AMD these days since the 660 Ti article - an AMD portal but no Nvidia portal, Mantle lovefest articles, etc. - the same reason I left Tom's years ago, circa 2001 or so. It's not the same team at tomshardware now, but the damage done then is still in many minds today (and shows at times in forum posts etc.). Anandtech would be wise to change course, but Anand isn't running things now and doesn't even own them today. I'd guess stock investors in the company that bought anandtech probably hold massive shares in sinking AMD ;) But that's just a guess.

    http://www.pcper.com/reviews/General-Tech/GDC-15-W...
    The real scoop on Vulkan. A few bits of code don't make Vulkan into Mantle...LOL. If it were based completely on HLSL you might have a valid argument, but that is far from the case here. It MIGHT be splitting hairs if this was IN, but it's NOT.

    http://www.pcper.com/category/tags/glnext
    The articles on glNext:
    "Vulkan is obviously different than Mantle in significant ways now, such as its use of SPIR-V for its shading language (rather than HLSL)."
    CORE? LOL. The core of Vulkan would be HLSL, not all the major changes due to the GROUP effort now.

    Trevett:
    "Being able to start with the Mantle design definitely helped us get rolling quickly – but there has been a lot of design iteration, not the least making sure that Vulkan can run across many different GPU architectures. Vulkan is definitely a working group design now."

    Everything that was AMD-specific is basically gone, as is the case with DX12 (Mantle ideas, but not direct usage). Hence NV showing victories in AMD's own Mantle showcase now (Star Swarm)...ROFL. How bad is that? Worse, NV was chosen for the DX12 Forza demo, which is an AMD console game. Why didn't MS choose AMD?

    They should have spent the time they wasted on Mantle making DX12/Vulkan driver advances, not to mention DX11 driver improvements, which affect everything on the market now and probably for a while into the future (at least until Win10 takes over, if it ever does, with Vulkan on billions of other devices first), rather than on a few Mantle games. Nvidia addressed the entire market with their R&D while AMD wasted it on Mantle, consoles & APUs. The downfall of AMD started with a really bad ATI purchase price and has been killing them ever since.
  • TheJian - Friday, May 15, 2015 - link

    Mantle is almost useless for FAST CPUs and is dead now (wasted R&D). It was meant to help AMD's weak CPUs, which only needed to happen because they let guys like Dirk Meyer (who in 2011 said it was a mistake to spend on anything but CORE CPU/GPU, NOT APU) & Keller go ages ago. Adding Papermaster might make up for missing Meyer though. IF they had NOT made these mistakes, we wouldn't even have needed Mantle, because they'd still be in the CPU race with much higher IPC, as we see with ZEN. You have no pricing power in APUs as they feed poor people and are being crushed by ARM coming up and Intel going down to stop them. GAMERS (and power users) will PAY a premium for stuff like Intel and Nvidia, & AMD ignored engineers who tried to explain this to management. It is sad they're now hiring them back to create again what they never should have left to begin with. The last time they made money for the year was the Athlon era and high IPC. Going into consoles instead of spending on CORE products was a mistake too, which is why Nvidia said they ignored it. We see they were 100% correct, as consoles have made AMD nothing and they lost the CPU & GPU race while dropping R&D on both, screwing the future too. The years spent on this crap caused AMD's current problems: 3 years on CPU/GPU with zero pricing power, selling off fabs and land, laying off 1/3 of employees, etc. You can't make a profit on low-margin junk without having massive share. Now if AMD had negotiated 20%+ margins from the get-go on consoles, maybe they'd have made money over the long haul. But as it stands they may not even recover the R&D and time wasted, as mobile kills consoles halfway through their life with yearly die shrinks and revisions, far cheaper games, and massive numbers sold yearly that are drawing devs away from consoles.

    Even now with the 300s coming (and only the top few cards are NOT rebadges, which will probably just confuse users and piss them off), Nvidia just releases a faster rehash of existing tech, waiting to answer and again keep a great product down in pricing. AMD will make nothing from the 300s. IF they had ignored consoles/APUs they would have ZEN out already (2 years ago? maybe 3?) and the 300s could have been made on an optimized 28nm process, like Maxwell squeezed out more perf on the same process 6 months ago. Instead NV has had nearly a year to just pile up profits on an old process and has an answer waiting in the wings (980 Ti) to make sure AMD's new GPU has no pricing power.

    Going HBM when it isn't bandwidth starved is another snafu that will keep costs higher, especially with low yields on that and the new process. But again, because of a lack of R&D (after blowing it on consoles/APUs), they needed HBM to help drop the wattage instead of having a great low-watt 28nm alternative like Maxwell, which can still milk very cheap old GDDR5 that has more than enough bandwidth as speeds keep increasing. HBM is needed at some point, just not today for a company needing profits that has no cash to burn on low yields, etc. They keep making mistakes and then having to make bad decisions to make up for them that stifle much-needed profits. They also need to follow Nvidia in splitting FP32 from FP64, as not doing so will further cement NV GPUs. When you specialize in both things instead of being a jack-of-all-trades loser at both, you win in perf and can price accordingly while keeping die size appropriate for both.

    Hopefully Intel will be forced back to this on the CPU side too, due to ZEN. Zen will force Intel to respond, because they won't be able to shrink their way to keeping the GPU (not with other fabs catching Intel's fabs) and still beat AMD with a die fully dedicated to CPU and IPC. Thank god too; I've been saying for ages that AMD needed to do this, and without doing it they would never put out another Athlon that would win for 2-3 years. I'm not even sure Zen can do this, but at least it's a step in the right direction for profits. Fortunately for AMD, an opening has been created by Intel massively chasing ARM and ignoring CPU enthusiasts and desktop pros. We have been getting crap on the CPU side since AMD exited, while Intel just piled onto the GPU side, which again hurt any shot of AMD making profits here...LOL. They don't seem to understand they make moves that screw themselves longer term. Short-term thinking kills you.
  • ToTTenTranz - Wednesday, May 13, 2015 - link

    Yes, and the APU being reviewed, the A8-7650K, also happens to be "AMD ONLY", so why not test Mantle? There's a reasonable number of high-profile games that support it:

    - Battlefield 4 and Hardline
    - Dragon Age: Inquisition
    - Civilization: Beyond Earth
    - Sniper Elite III

    Plus another bunch coming up, like Star Wars Battlefront and Mirror's Edge.

    So why would it hurt so much to show at least one of these games running Mantle with a low-specced CPU like this?

    What is Anandtech so afraid to show by refusing to test Mantle comparisons with anything other than >$400 CPUs?
  • V900 - Thursday, May 14, 2015 - link

    There isn't anything to be scared of, but Mantle is only available in a handful of games, and beyond those it's dead and buried.

    Anandtech doesn't run Mantle benchmarks for the same reason they don't review AGP graphics cards: It's a dead technology aside from the few people who currently use it...
  • chizow - Tuesday, May 12, 2015 - link

    I seriously considered an A10-7850K Kaveri build last year around this time for a small power-efficient HTPC to stream DVR'd shows from my NAS, but in the end a number of issues steered me away:

    1) Need for chassis, PSU, cooler.
    2) Lack of good mini-ITX options at launch.
    3) Not good enough graphics for gaming (not a primary consideration anyways, but something fast enough might've changed my usage patterns and expectations).

    Sadly, this was the closest I've gotten to buying an AMD CPU product in a long, long time, but ultimately I went with an Intel NUC that was cheaper to build, had a smaller form factor, and used much less power. And all I gave up was GPU performance that wasn't realistically good enough to change my usage patterns or expectations anyways.

    This is the problem AMD's APUs face in the marketplace today, though. That's why I think AMD made a big mistake in betting their future on Fusion: people just aren't willing to trade fast, efficient, top-of-the-line CPUs for a mediocre CPU/GPU combo.

    Today, there are even bigger challenges out there for AMD. Alienware offers the Alpha with an i3 and a GTX 860M that absolutely destroys these APUs in every metric for $500 ($400 on sale), and it takes care of everything from chassis, PSU, and cooling to Windows licensing. That's what AMD is facing now in the low-end PC market, and I just can't see them competing with that kind of performance and value.
  • silverblue - Tuesday, May 12, 2015 - link

    I would have opted for the A8-7600 instead of the 7850K, though I do admit it was very difficult to source back then. 65W mode doesn't perform much faster than 45W mode. I suppose it's all about what you want from a machine in the end, and AMD don't make a faster CPU with a weaker iGPU, which might make more sense.

    The one thing stopping AMD from releasing a far superior product, in my eyes, was the requirement to at least try to extract as much performance as possible from a flawed architecture, so they could say it wasn't a complete waste of time.
  • galta - Tuesday, May 12, 2015 - link

    +1
    Fusion was not only poor strategy, it was poor implementation.
    Leaving aside the discussion of the merits of integrated GPUs, if AMD had done it right we would have seen Apple adopting their processors in the MacBook line, given Apple's obsession with slim hardware with no discrete graphics.
    Have we seen that? No.
    You see, even though Intel has never said that integrated GPUs were the future, they claimed the single most important customer in that market segment.
