
7 Comments


  • britjh22 - Thursday, July 31, 2014 - link

    Any word on actual vendor availability? Particularly for the A8-7600, which was the most interesting component for me and many others at the initial Kaveri launch (even more so now that the list price is reduced). This is the chip I'd probably use for all builds for family & friends due to its great balance.
  • konroh77 - Thursday, July 31, 2014 - link

    Agreed! I have done several builds that would have been nice with the A8-7600, and I either had to go with an older APU or an i3, depending on the purpose of the machine. Would be nice if I could actually buy one of these chips (6+ months after it was announced)!
  • Stuka87 - Thursday, July 31, 2014 - link

    Little misleading how they lump CPU and GPU cores together like that. But outside of that, pretty excited for these.
  • Death666Angel - Thursday, July 31, 2014 - link

    Totally misleading how they tell you exactly what is what in their core count.
  • mickulty - Thursday, July 31, 2014 - link

    They do refer to them as *compute* cores, and make the breakdown clear. Also they break down the GPU fairly, rather than quoting the number of shaders (512 on the 7850k). How else could they be expected to emphasise the parallel computing capabilities of their APUs when HSA is used properly?
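
    For reference, a worked version of the arithmetic behind those figures, assuming the usual GCN layout of 64 shaders per compute unit:

        4 CPU cores + 8 GPU compute units = 12 "compute cores" (A10-7850K)
        8 compute units x 64 shaders/unit = 512 shaders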
  • name99 - Thursday, July 31, 2014 - link

    I'm more interested in how close these come to the real promise of HSA. Are we there, or are we still on the way there? In particular:
    - shared address space? If I pass a pointer from the CPU to the GPU, will it just work? (a rough sketch of what this would look like follows after this list)
    - transparent coherence between CPU and GPU?
    - interrupt support on the GPU? (so I can time-slice the GPU like a normal OS/CPU combo)
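
    A minimal sketch of the first point in practice, assuming the shared address space is reached through OpenCL 2.0 fine-grained SVM (one of the ways Kaveri exposes it) rather than AMD's own HSA runtime; the kernel, sizes, and omitted error handling below are purely illustrative:

        /* Sketch: one allocation visible at the same virtual address to CPU and GPU. */
        #include <CL/cl.h>
        #include <stdio.h>

        int main(void) {
            cl_platform_id platform; cl_device_id device;
            clGetPlatformIDs(1, &platform, NULL);
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
            cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
            cl_command_queue q = clCreateCommandQueueWithProperties(ctx, device, NULL, NULL);

            /* Trivial illustrative kernel: double every element in place. */
            const char *src = "kernel void scale(global int *p)"
                              "{ size_t i = get_global_id(0); p[i] *= 2; }";
            cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
            clBuildProgram(prog, 1, &device, "-cl-std=CL2.0", NULL, NULL);
            cl_kernel kernel = clCreateKernel(prog, "scale", NULL);

            size_t n = 1024;
            int *data = clSVMAlloc(ctx, CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER,
                                   n * sizeof(int), 0);
            for (size_t i = 0; i < n; i++)
                data[i] = (int)i;                      /* CPU writes through the pointer...      */

            clSetKernelArgSVMPointer(kernel, 0, data); /* ...and the GPU gets the same pointer,  */
            clEnqueueNDRangeKernel(q, kernel, 1, NULL, /* no explicit copies or map/unmap calls. */
                                   &n, NULL, 0, NULL, NULL);
            clFinish(q);

            printf("data[0] = %d\n", data[0]);         /* CPU reads the GPU's result directly.   */
            clSVMFree(ctx, data);
            return 0;
        }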
  • name99 - Thursday, July 31, 2014 - link

    OK, after some further reading around I see that the answer to the first two questions is yes --- memory IS finally done right here. I assume there is some protocol under the covers that "reflects" GPU MMU faults (and perhaps TLB misses?) to the CPU to handle the problem, rather than having the GPU deal with that, but that's an implementation detail.

    I still don't know what the interrupt support on the GPU side is. The interrupt side is important because once it is ALSO in place, we get to a very interesting situation where the developer+OS can basically treat GPUs as just like CPUs, only running a different ISA. The developer can spawn threads, or enqueue tasks, that are targeted at the GPU ISA rather than the x86 ISA, and everything will just work. (We have this already.)
    AND the OS will be able to schedule long-running tasks between GPUs, time-sharing them as necessary. (This is the part that I'm not sure is in place yet. Obviously context-switching a GPU is a more heavyweight operation than context-switching a CPU, but you want it to be possible; otherwise you have to enforce artificial restrictions on the code that runs on the GPU to ensure it doesn't run too long --- and the whole point is to get rid of those restrictions.)
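
    To make those "artificial restrictions" concrete, here is a hypothetical helper (again using OpenCL as a stand-in, not any vendor's actual scheduler): today a long-running job tends to be broken into short dispatches so the GPU is never held past a driver watchdog limit, whereas with real GPU preemption the OS could simply time-slice one long dispatch like any CPU thread.

        #include <CL/cl.h>

        /* Hypothetical helper: run 'total' work-items in slices of 'chunk' so the
         * GPU is handed back to the system between dispatches. */
        void run_in_chunks(cl_command_queue q, cl_kernel kernel, size_t total, size_t chunk)
        {
            for (size_t offset = 0; offset < total; offset += chunk) {
                size_t count = (total - offset < chunk) ? total - offset : chunk;
                /* Each dispatch covers only a slice of the range: the artificial
                 * restriction imposed because the GPU can't yet be context-switched. */
                clEnqueueNDRangeKernel(q, kernel, 1, &offset, &count, NULL, 0, NULL, NULL);
                clFinish(q);   /* yield the GPU between slices */
            }
        }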
