
Thread: Polaris 10 size / performance estimation

  1. #71
    Quote Originally Posted by DCO
    We need to remember that Pitcairn is already old compared to the latest GCN iteration: fewer geometry processors, tessellators and ACEs, a smaller VCE, UVD and scaler, no memory compression (though perhaps AMD will narrow the memory bus), no CrossFire XDMA, no TrueAudio... and who knows by how much the new features increase the transistor count.

    Doesn't Pitcairn also make room for a 256-bit interface?
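
    (A rough sketch of the bus-width point, in Python: if Polaris gains delta color compression, a narrower bus can approach Pitcairn's effective bandwidth. The 1.3x compression factor below is a hypothetical placeholder, not a published figure.)

    ```python
    # Effective-bandwidth sketch: delta color compression vs. a narrower bus.
    # The 1.3x average compression gain is a hypothetical placeholder.

    def raw_bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
        """Peak bandwidth in GB/s: bus width (bits) * per-pin rate / 8 bits per byte."""
        return bus_bits * gbps_per_pin / 8

    pitcairn  = raw_bandwidth_gbs(256, 4.8)  # HD 7850/7870: 256-bit GDDR5 -> 153.6 GB/s
    narrow    = raw_bandwidth_gbs(128, 7.0)  # hypothetical 128-bit bus    -> 112.0 GB/s
    effective = narrow * 1.3                 # assumed ~1.3x compression   -> ~145.6 GB/s

    print(pitcairn, narrow, round(effective, 1))
    ```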

  2. #72
    Banned · Join Date: Oct 2010 · Posts: 490
    Quote Originally Posted by Z O X
    Shouldn't we divide the number of SPs by at least four?
    No. Even though Nvidia and AMD architectures differ, AMD needs to improve shader efficiency to be competitive with Nvidia in terms of perf/watt and perf/sqmm. AMD Polaris seems to be a step in that direction. I think AMD will match Maxwell with Polaris in terms of perf per CUDA core vs. perf per SP. But whether they can do better than that and compete with Pascal remains to be seen. I think AMD's goal of console-quality performance in a thin-and-light notebook hints at PS4-class performance at 25-30 W.

    My guess is the notebook version will be clocked at 900 MHz. I am expecting a 1024-SP chip at 100-110 sq mm with a 128-bit GDDR5 bus (7 Gbps = 112 GB/s) or a 128-bit GDDR5X bus (10 Gbps effective = 160 GB/s). 900 MHz x 2 x 1024 SPs ≈ 1.8 TFLOPS (matches PS4 performance). For desktop I think the clocks will be 1.1-1.2 GHz.
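
    A quick sanity check of those numbers in Python (a minimal sketch; the SP count, clocks and bus widths are the guesses above, not confirmed specs):

    ```python
    # Peak FP32 throughput: 2 FLOPs (one FMA) per SP per clock.
    def fp32_tflops(shaders: int, clock_mhz: float) -> float:
        return shaders * 2 * clock_mhz * 1e6 / 1e12

    # Peak memory bandwidth: bus width (bits) * per-pin rate (Gbps) / 8 bits per byte.
    def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
        return bus_bits * gbps_per_pin / 8

    print(fp32_tflops(1024, 900))   # 1.84 TFLOPS -- vs. PS4: 1152 SPs @ 800 MHz = 1.84
    print(bandwidth_gbs(128, 7))    # 112.0 GB/s  -- 128-bit GDDR5 @ 7 Gbps
    print(bandwidth_gbs(128, 10))   # 160.0 GB/s  -- 128-bit GDDR5X @ 10 Gbps effective
    ```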

  3. #73
    Senior Member · Join Date: Oct 2010 · Posts: 374
    If AMD cut as much out of their GPUs as Maxwell did, they would be about equal in terms of perf/watt and perf/sqmm. Pascal, from my understanding, needs to add DP back. If the 290/390 were cut down as much as the 970/980, they would be much closer.

    Look at Nano, with its DP cut down more than the 390/X.
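
    For reference, the FP64:FP32 rates being compared (a quick illustrative sketch from public spec sheets):

    ```python
    # FP64 throughput as a fraction of FP32 on the consumer cards mentioned above.
    # Hawaii silicon is 1:2 capable but is limited to 1:8 on the 290/390.
    fp64_rate = {
        "R9 290/390 (Hawaii)":   1 / 8,
        "R9 Nano / Fury (Fiji)": 1 / 16,   # DP cut down further than Hawaii
        "GTX 970/980 (GM204)":   1 / 32,   # Maxwell's consumer DP cut
    }

    for gpu, rate in fp64_rate.items():
        print(f"{gpu}: 1/{round(1 / rate)} of FP32 rate")
    ```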

  4. #74
    Quote Originally Posted by Queamin
    If AMD cut as much out of their GPUs as Maxwell did, they would be about equal in terms of perf/watt and perf/sqmm. Pascal, from my understanding, needs to add DP back. If the 290/390 were cut down as much as the 970/980, they would be much closer.

    Look at Nano, with its DP cut down more than the 390/X.
    Hardware scheduling will likely go back in, as well as various compute features.

  5. #75
    Senior Member · Join Date: Oct 2010 · Posts: 374
    I am not sure they would have had time to put hardware scheduling in; it all depends on when they started Pascal, and whether they could have changed it that late, since Pascal could have been started before DX12 existed.

    AMD would have started with it in Polaris, as it was already in the generation of GPUs before it.

  6. #76
    Senior Member · Join Date: Dec 2012 · Posts: 7,494
    To me, Pascal is clearly a rush job of "how can we make Maxwell do neural networks and compute (FP64)?"
    -Q

  7. #77
    Senior Member · Join Date: Sep 2012 · Posts: 3,316
    Quote Originally Posted by testbug00
    To me, Pascal is clearly a rush job of "how can we make Maxwell do neural networks and compute (FP64)?"
    I think 28nm Maxwell was the rush job... "20nm is cancelled, what can we hack out of our next gen arch to port it back to 28nm?"

  8. #78
    Senior Member · Join Date: Jan 2010 · Posts: 1,903
    Quote Originally Posted by pTmd
    CUDA cores and SPs in this case are just the per-clock single-precision shader capacity. How irrelevant could it be if one is proven to achieve the same level of graphics performance with less circuitry?
    It's two different uarchs. It's like saying AMD is better because they get their performance at lower clocks. Or nVidia is better because they clock higher. There are too many variables to pick one thing and say they are so much better because of this.
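
    A concrete example of why peak per-clock shader capacity alone doesn't settle it (reference specs; a rough sketch, since delivered game performance also depends on geometry throughput, ROPs, bandwidth and drivers):

    ```python
    # Peak FP32 throughput = shaders * 2 FLOPs (FMA) * clock.
    def tflops(shaders: int, clock_mhz: float) -> float:
        return shaders * 2 * clock_mhz * 1e6 / 1e12

    print(tflops(1664, 1178))   # GTX 970 at boost clock: ~3.9 TFLOPS
    print(tflops(2560, 1000))   # R9 390:                 ~5.1 TFLOPS
    # ~30% apart in peak FLOPS, yet close together in typical games --
    # which is why picking one metric across two uarchs proves little.
    ```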

  9. #79
    Senior Member · Join Date: Dec 2012 · Posts: 7,494
    Quote Originally Posted by NTMBK
    I think 28nm Maxwell was the rush job... "20nm is cancelled, what can we hack out of our next gen arch to port it back to 28nm?"
    I think they had the concept down, and when they were going to start designing it, or shortly after they started, they were basically told "20nm is garbage for GPUs, and Apple is going to take most of the volume anyway."
    -Q

  10. #80
    Senior Member · Join Date: Dec 2012 · Posts: 7,494
    Quote Originally Posted by Relayer
    It's two different uarchs. It's like saying AMD is better because they get their performance at lower clocks. Or nVidia is better because they clock higher. There are too many variables to pick one thing and say they are so much better because of this.
    Obviously Nvidia's architecture is better because you get their CUDA software in hardware. Each CUDA CORE has a whole copy of CUDA software written in it.

    Obviously AMD's architecture is better because it's GCN. Which is 3 letters, like CPU, GPU, APU, ARM, AMD, etc. 3 letters in caps rule what kind of designs are done.
    -Q
