SemiAccurate Forums  

 
#911
04-16-2012, 10:16 PM
sdlvx

Quote:
Originally Posted by Shadow Concept View Post
Do you think this will lead to AMD moving the performance segment goalposts like NVidia did?

Before Kepler, Pitcairn was the peak of "pure gaming" oriented cards, and of course Tahiti was the top card with the immense feature set. Then NV moves the "pure gaming" goalpost by creating a much bigger "pure gaming" card, not far from the size of Tahiti, but with the same design goals as Pitcairn.

From my observations, GCN still has quite the upper hand, so if AMD were to create a card close to or the same size as Tahiti/Kepler, but with Pitcairn/680 style design goals (+ any slight improvements they've made in the meantime), they should, at the same die size, stomp the current Kepler into the ground on all metrics.

I guess the problem with that is it would push the "full featured" top card into a die-size area that AMD might prefer to avoid. But with the size and feature set of BigGK, I can see them doing their usual trick of building a die much, much smaller, getting 90% of the performance at two-thirds the price and laughing all the way to the bank.
I think moving the performance segment goalposts and dropping GPGPU from gaming cards is the smartest thing to do, given the cost of TSMC nodes as they shrink, if all you do is make GPUs. As I said before, NV has been pushing CUDA and GPGPU for a very long time.

http://www.beyond3d.com/content/news/304

We are coming up on the five-year anniversary of CUDA. If there hasn't been a killer consumer application using CUDA in five whole years, there's never going to be one. GK104, and NV making GK104 the GTX 680, makes me feel like NV is finally admitting defeat in the consumer GPGPU space. Don't get me wrong, they've got GPGPU covered in other markets quite extensively, but for the consumer it doesn't matter.

It looks like GPGPU in the consumer space is a bad investment. But it's definitely not for AMD: AMD is aiming for HSA, and they want the GPU and CPU to work very closely with each other. Intel and NV have nothing to gain from doing that with each other. Intel and NV are about to start competing with each other once Knights Corner comes around.

Given AMD's goals, I don't see them dropping GPGPU in consumer cards, while it makes sense for NV. The work with the GPU architectures will pay off hugely if AMD plays everything right in the end.

All you have to do is look at AMD's existing hardware to know what direction they're going towards. Bulldozer sucks at floating point and is awesome at integer math. GCN is awesome at floating point and sucks at integer math. It only makes sense given AMD's stated goals of HSA that these two product derivatives should eventually exist on the same chip and work to execute the same code for impressive performance.

The only remotely decent real-world app for consumers (I'm not talking about F@H and the like) was CUDA transcoding, and the GTX 680 has NVENC. If they've replaced their strongest consumer GPGPU app with something like that, they've abandoned the consumer market.
#912
04-17-2012, 02:36 AM
prender

I've got a what-if scenario that I've done a little thinking about. It may be just downright illogical and stupid thinking on my part, as I'm not exactly as technically minded as some of you guys on here seem to be, but just humour me...

What if the GTX680 wasn't exactly the real "Kepler" that nVIDIA was going to release? What if the GTX680 was a backup plan, if the real Kepler cards were too difficult at this point in time for nVIDIA to manufacture?

Is that even possible? I don't know, just a feeling I've got. Probably nothing, maybe indigestion. Too much bacon, perhaps. What if the entire Kepler series was supposed to still have its GPGPU functionality, but they decided to keep a backup plan: a card stripped of most of its compute functions and clocked high, in order to try and compete with Tahiti, at least from a gaming perspective, and to keep their investors/shareholders happy until they can get their real Kepler cards out? What if it was just a gamble for nVIDIA that happened to pay off, as in the GTX680 performing above their own expectations?

This isn't intended to be trolling, or baiting or anything of that sort. I was just in the bath the other day, and the idea came to me. I come up with all sorts of strange thoughts and ideas whenever the bath, or the shower, is involved.

Last edited by prender; 04-17-2012 at 02:42 AM.
#913
04-17-2012, 02:44 AM
Relayer

Quote:
Originally Posted by prender View Post
What if the GTX680 wasn't exactly the real "Kepler" that nVIDIA was going to release? What if the GTX680 was a backup plan, if the real Kepler cards were too difficult at this point in time for nVIDIA to manufacture? [snipped]
I would think GK104 is exactly as planned; probably clocked a little higher than if GK110 had come along as planned. Reduced compute functionality in the second-tier chips isn't a new thing.
#914
04-17-2012, 02:49 AM
Shadow Concept

Quote:
Originally Posted by sdlvx View Post
I think moving the performance segment goal posts and dropping GPGPU from gaming cards is the smartest thing to do if all you do is make GPUs given the cost of TSMC nodes as they shrink. [snipped]
You've nicely summed up what I've been feeling for a while. However, unlike Intel, NVidia has no advantage over AMD in the parts where they compete; in fact I think the opposite. As I said before, the fact that NVidia's stripped-out gaming card has to be almost the same size as AMD's full-featured card, with significantly higher clocks (and a boost on top of that), just to beat it by a small margin, speaks volumes about the engineering gap between the two. If AMD were to strip out the extra features, remove 128 bits of memory controller in exchange for higher-clocked memory, and raise clocks to match the 680, you would have a card with the 680's performance at a much smaller die size.

Nvidia has made a win here with product positioning, but considering Kepler is supposed to be some miracle architecture, it is still a long way behind in many areas when you compare it to the competition more closely (I'm talking about the architecture itself, not 680 vs 7970). People at NVidia must be getting increasingly concerned about this; I can't be the only one who sees it.
#915
04-17-2012, 06:35 AM
esrever

I doubt nvidia can make their big compute card efficient. The 680 already draws about as much power as the 7970, and its performance/watt isn't nearly as good as the 7870's.

The 7870 is about 30% more efficient than the 7970 in performance per watt: the 7970 has about 25% more performance but about 60% more power consumption, and its die is about 70% larger than the 7870's. (Numbers taken from the TPU Asus OC review, for the stock 7970 and 7870.)

Going from GK104 to GK110 is about the same die-area jump as going from the 7870 to the 7970. So GK110 would be about 25% faster than GK104 and consume about 60% more power. And that's a minimum, as the 7870 is actually faster than GK104 in GPGPU.

So that's a 312W GPU with a ~520mm² die for GK110. Nvidia will probably market it as 250W. It might be the 480 all over again if nvidia can't get their chip to be more efficient than AMD's. Considering the 195W 680 already runs hot, I wonder what a 300W+ GK110 will be like.

If Nvidia wants a chip with a manageable TDP, it would probably be barely faster than the 680 in gaming, due to having to run at much lower clock speeds.
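[Editor's note: the back-of-the-envelope projection above can be written out in a few lines of Python. All inputs are the figures quoted in the post (TPU review numbers), not independent measurements; this is just the post's arithmetic made explicit.]

```python
# Projecting GK110 from GK104 using the 7870 -> 7970 scaling
# figures quoted in the post above (TPU review numbers as cited).
perf_scale = 1.25    # 7970 is ~25% faster than the 7870
power_scale = 1.60   # ...while drawing ~60% more power

gtx680_tdp_w = 195   # Nvidia's rated TDP for the GTX 680

# Assume GK104 -> GK110 scales the same way 7870 -> 7970 did
gk110_tdp_w = gtx680_tdp_w * power_scale      # 195 * 1.6 = 312 W
gk110_speedup = perf_scale - 1                # ~25% faster than a 680

print(f"Projected GK110: ~{gk110_tdp_w:.0f} W, ~{gk110_speedup:.0%} faster than a GTX 680")
# -> Projected GK110: ~312 W, ~25% faster than a GTX 680
```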
#916
04-17-2012, 07:03 AM
boxleitnerb

Not really, it depends on the game. In many games the 680 uses quite a bit less power than the 7970. That a 500+mm² die with strong DP performance cannot reach the efficiency of Pitcairn is clear.
I'm a bit more optimistic, though. Firstly, I expect GK110 to be 40% faster than the 680. Secondly, GPU boost will come in handy in using the TDP headroom better.
#917
04-17-2012, 07:20 AM
esrever

It doesn't matter what the 680 pulls compared to the 7970.
I'm comparing the 7970 to the 7870 and translating that to the 680 and the Nvidia compute card.

Moving from gaming to compute at AMD, efficiency goes down 30% and power consumption goes up by 60%. I don't see how Nvidia can magically make that a more efficient transition, considering the 680 is farther from GK110 than the 7870 is from the 7970 in compute.

http://www.techpowerup.com/reviews/N...TX_680/25.html
The 680 generally consumes almost the same amount of power as the 7970, so 195W of Nvidia TDP corresponds to 250W of AMD TDP (about 22% less). The 312W GK110 figure is a Nvidia TDP; in AMD TDP terms it would be roughly 400W.

No wonder AMD didn't want to make a chip that big and settled for a smaller gaming chip and a smaller compute chip.
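[Editor's note: the TDP-rating conversion in the post can be checked the same way; with the exact inputs quoted above it comes out at roughly 400 W.]

```python
# Converting a Nvidia-rated TDP into "AMD TDP" terms, using the post's
# observation that the 195 W 680 and the 250 W 7970 draw similar power.
nvidia_tdp_680_w = 195.0
amd_tdp_7970_w = 250.0

ratio = amd_tdp_7970_w / nvidia_tdp_680_w   # ~1.28: AMD rates ~28% higher

gk110_nvidia_tdp_w = 312.0                  # projected earlier in the thread
gk110_amd_equiv_w = gk110_nvidia_tdp_w * ratio

print(f"312 W Nvidia TDP ~= {gk110_amd_equiv_w:.0f} W AMD TDP")
# -> 312 W Nvidia TDP ~= 400 W AMD TDP
```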
#918
04-17-2012, 07:26 AM
distinctively

Quote:
Originally Posted by boxleitnerb View Post
Not really, it depends on the game. In many games the 680 uses quite a bit less power than the 7970. That a 500+mm² die with strong DP performance cannot reach the efficiency of Pitcairn is clear.
I'm a bit more optimistic, though. Firstly, I expect GK110 to be 40% faster than the 680. Secondly, GPU boost will come in handy in using the TDP headroom better.
That's more like it! Some hard predictions coming out. I'll just put my clean record of never being right on the line and predict a 25% increase for GK110 over GK104. I do agree that GPU boost might come in handy to control TDP.
#919
04-17-2012, 07:50 AM
kalelovil

Quote:
Originally Posted by esrever View Post
I'm comparing the 7970 to the 7870 and translating that to the 680 and the Nvidia compute card.

Moving from gaming to compute at AMD, efficiency goes down 30% and power consumption goes up by 60%. I don't see how Nvidia can magically make that a more efficient transition, considering the 680 is farther from GK110 than the 7870 is from the 7970 in compute.
Compute functionality isn't the only thing explaining Tahiti's relative inefficiency.
Charlie has mentioned that Tahiti has a considerable amount of redundancy (and probably takes a conservative approach to more than just the core clock), since it was the first ASIC on TSMC's 28nm process and AMD wanted to reduce their risks as much as possible.

Although if what Charlie has heard is correct, and Nvidia is having significant problems producing a chip that isn't even 300mm² on TSMC's 28nm process, then big Kepler is still quite a way off (or will never reach the mass gaming market).

Last edited by kalelovil; 04-17-2012 at 07:52 AM.
#920
04-17-2012, 07:52 AM
boxleitnerb

Quote:
Originally Posted by esrever View Post
It doesn't matter what the 680 pulls compared to the 7970. I'm comparing the 7970 to the 7870 and translating that to the 680 and the Nvidia compute card. [snipped]
One game (Crysis 2) != generally.
Tahiti is so "slow" compared to Pitcairn because its front end is holding it back. Kepler scales well; it has a better front end. I believe we will see good scaling from GK110's additional units. TDP is the big question: 250W, 270W, or 300W?