
Thread: Any news on Kepler?

  1. #671
    Senior Member
    Join Date
    Sep 2009
    Posts
    3,026
    Quote Originally Posted by Drunkenmaster View Post

    Have we seen anything official anywhere that actually makes it seem like the hotclocks really are gone?
    I was going to ask the same question earlier. I'm pretty sure nothing has been confirmed. The whole question of different shader types could really change our outlook on the upcoming chips.

    A little more info would be nice here. This is like trying to solve a geometry problem with too many variables missing. Isn't there any way we can force some information out of these people? Surely some of these execs are having affairs and are just prime for blackmail.
    When people tell me "You're going to regret that in the morning" I sleep in till noon because I'm a problem solver!

  2. #672
    The thing that just jumps out at me is this: with Fermi supposedly such a huge change, another radical change for Kepler seems unlikely, and dropping hot clocks and either going with many small shaders, or keeping the same bigger shaders but having them do much more work per clock, is huge.
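
    To put rough numbers on that trade (a back-of-envelope sketch; the non-hot-clock configuration below is a made-up design point, not a leaked spec):

    Code:
    # Peak single-precision throughput = ALU count * clock * 2 (one FMA = 2 flops).
    def sp_gflops(alus, clock_mhz):
        return alus * clock_mhz * 2 / 1000.0

    print(sp_gflops(512, 1544))    # ~1581 GFLOPs: GTX 580, published hot clock
    print(sp_gflops(1536, 1000))   # ~3072 GFLOPs: hypothetical wide part at ~1 GHz, no hot clock

    Either route buys throughput; the question is what each costs in area and power.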

    All the leaks are just random people making a table based on one rumour or another, nothing more or less than that.

    I still wouldn't be surprised to see the "usual" Nvidia take on things: GK100 planned as a 1024 shader part, GK104 as 768, and potentially either dropping these down purely on poor yields or, like the GTX 280, maybe they'll drop a few units for a smaller core to start with (and have pap yields as well).

    That seems the "easy" and natural route, but who knows. I won't be surprised if we see hot clocks, missed clock-speed targets, and a core that is too big and too difficult to get good yields on for a while. But I also wouldn't be surprised if Nvidia finally realised that hot clocks, and aiming for massive hot clock speeds this gen, would kill them power/leakage-wise. At that point the question becomes how early they realised it: is this a bodge job of getting anything working that they can without a hot clock, or, if they saw it coming a couple of years ago, have they both dropped the hot clocks and finally gone with a more area-efficient design?
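
    For what it's worth, the power side of that argument sketches out like this (the voltage-vs-frequency relation here is a simplifying assumption of mine, not silicon data):

    Code:
    # Dynamic power scales roughly as units * V^2 * f, and hitting a higher
    # target clock usually demands a higher voltage.
    def rel_power(units, freq_ghz, v_per_ghz=0.15):
        voltage = 1.0 + v_per_ghz * (freq_ghz - 1.0)   # assumed V/f relation
        return units * voltage**2 * freq_ghz

    print(rel_power(512, 1.6))    # ~973: few ALUs, hot-clocked
    print(rel_power(1024, 0.8))   # ~771: twice the ALUs at half the clock

    Same ALU-GHz either way, but the hot-clocked design burns roughly a quarter more power, while the wide design pays in die area instead, which is exactly the trade described above.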

    We'll see. I'm still seeing rumours of mobile parts coming first, arriving in Ivy Bridge laptops and being ready for April-May, and we've seen Nvidia mention Ivy Bridge and Kepler together for six months, actually on the record. They tried to spin it as Ivy Bridge being delayed, so the low end is just waiting on Ivy Bridge... a useful excuse, but if it was ready months earlier (as they've tried to suggest before), why not put the low-end chips in other, non-Ivy laptops? Exactly: it's BS. Nvidia have known for a LONG time that the mobile parts weren't going to be ready till April, and ignoring some more recent, rapidly changing rumours, I'm not expecting GK104 till after that. We've also seen people mention more like June than April/May, and I'm guessing that's in reference to GK104, not the low end.

    Because frankly, what interest does someone like Kyle ([H]) have in the low end? Have they ever reviewed the low end?

  3. #673
    Quote Originally Posted by Drunkenmaster View Post
    The thing that just jumps out at me is this: with Fermi supposedly such a huge change, another radical change for Kepler seems unlikely, and dropping hot clocks and either going with many small shaders, or keeping the same bigger shaders but having them do much more work per clock, is huge.
    I'd say they've been evaluating all types of units for hot clock benefits ever since G80. Up to Fermi it obviously only made sense for the ALUs and no other units.

    If the changes in Kepler were minor it would merely be a refresh and not a new generation. However, the interconnect was a radical change in Fermi and I doubt they've dropped it in Kepler.

    All the leaks are just random people making a table based on one rumour or another, nothing more or less than that.
    What else was different with any other launch? Where's the XDR2 RAM on Tahiti, exactly?

    I still wouldn't be surprised to see the "usual" Nvidia take on things: GK100 planned as a 1024 shader part, GK104 as 768, and potentially either dropping these down purely on poor yields or, like the GTX 280, maybe they'll drop a few units for a smaller core to start with (and have pap yields as well).
    That's a Fermi shrunk to 28nm. It doesn't explain how they reached over 2.5x sustainable double-precision FLOPs/W going from Fermi Teslas to Kepler Teslas, and that's a (marketing) claim straight out of the lion's mouth, not some funky rumour compiled by a random bunch of speculating parrots.
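
    For scale, here is what that multiplier implies if you take the published Tesla M2090 numbers (665 DP GFLOPs at a 225 W TDP) as the Fermi baseline:

    Code:
    # "Over 2.5x DP FLOPs/W vs Fermi Tesla", applied to the M2090's published specs.
    fermi_gflops_per_w = 665.0 / 225.0            # ~2.96 DP GFLOPs/W
    kepler_gflops_per_w = 2.5 * fermi_gflops_per_w
    print(kepler_gflops_per_w * 225.0)            # ~1663 DP GFLOPs in the same 225 W envelope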

    Some of those funky tables even suggested idiotically high hot clocks, but that's probably because their authors couldn't come up with another way to fit the aforementioned claim from NVIDIA itself. In Fermi they fixed the ALU-to-core frequency ratio at exactly 2x (unlike former architectures, which had higher ratios); if NV kept the hot clocks, it's more than unlikely that they bounced back up to 2.5x or even higher ratios, as suggested in some spots.
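
    The published reference clocks bear that out; the ratio had been drifting down toward 2x, not up:

    Code:
    # Shader-to-core clock ratio per generation (published reference clocks).
    for name, core, shader in [("8800 GTX (G80)", 575, 1350),
                               ("8800 GTS 512 (G92)", 650, 1625),
                               ("GTX 280 (GT200)", 602, 1296),
                               ("GTX 580 (GF110)", 772, 1544)]:
        print(f"{name}: {shader / core:.2f}x")   # 2.35x, 2.50x, 2.15x, 2.00x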

    That seems the "easy" and natural route, but who knows.
    Under that reasoning, IHVs would just do shrinks with higher unit counts in between new technology generations. Why then did AMD bother with GCN, for example, instead of just increasing the cluster count on SI? Could it be, coincidentally, that IHVs also have to foresee future trends and necessities, and can't just chew on the same stuff forever?

    I won't be surprised if we see hot clocks, missed clock-speed targets, and a core that is too big and too difficult to get good yields on for a while. But I also wouldn't be surprised if Nvidia finally realised that hot clocks, and aiming for massive hot clock speeds this gen, would kill them power/leakage-wise. At that point the question becomes how early they realised it: is this a bodge job of getting anything working that they can without a hot clock, or, if they saw it coming a couple of years ago, have they both dropped the hot clocks and finally gone with a more area-efficient design?
    No IHV can make such changes overnight. You might want to consider in what direction Einstein as a generation might go, and how far and how early Dally's influence might have found its way into their architectures.

    We'll see. I'm still seeing rumours of mobile parts coming first, arriving in Ivy Bridge laptops and being ready for April-May, and we've seen Nvidia mention Ivy Bridge and Kepler together for six months, actually on the record. They tried to spin it as Ivy Bridge being delayed, so the low end is just waiting on Ivy Bridge... a useful excuse, but if it was ready months earlier (as they've tried to suggest before), why not put the low-end chips in other, non-Ivy laptops? Exactly: it's BS. Nvidia have known for a LONG time that the mobile parts weren't going to be ready till April, and ignoring some more recent, rapidly changing rumours, I'm not expecting GK104 till after that. We've also seen people mention more like June than April/May, and I'm guessing that's in reference to GK104, not the low end.
    When do you think smaller-than-GK104 cores would have had to arrive in partners' hands in order to be on time for any of the timeframes you're mentioning for final product releases?

    Because frankly, what interest does someone like Kyle ([H]) have in the low end? Have they ever reviewed the low end?
    Which is an indication for what exactly?

    Quote Originally Posted by distinctively View Post
    Surely some of these execs are having affairs and are just prime for blackmail.
    Good thing I know only one thing, then: that I know nothing. Otherwise, Lord knows what kind of suspicions would lurk left and right, LOL.

  4. #674
    Quote Originally Posted by trandoanhung1991 View Post
    http://www.fudzilla.com/graphics/ite...-early-q2-2012

    That's some bold FUD/talk/smack-talk from Fudo. However, I do notice a more "neutral" tone in this article.
    That article is identical to the GF100 one where he said it would release in Dec 2009. So I will add +3 months and subtract performance. My interpretation: Big Kepler comes in June with slightly better performance than the HD 7970 while drawing way more power.

  5. #675
    8-bit overflow
    Join Date
    Sep 2010
    Posts
    310
    Quote Originally Posted by trandoanhung1991 View Post
    And finally, he's "reaffirming" that Kepler will be much, much faster than Tahiti, according to his sources. I really scratch my head at this one. Unless he's talking about a potential GTX 670/GTX 665, I seriously doubt GK104 can win over the 7970.
    He also says that "Nvidia is late to the party, but with some overclocking, the old Geforce GTX 580 still stands up well against the Radeon HD 7970" ...

    Funny, I read that right after viewing the very nice OC article over at HardOCP, which basically said that (@ 2560x1600):

    (1) Stock HD 7970 is on average 15-20% faster than overclocked GTX 580.
    (2) HD 7970 with simple CCC overclock (i.e. apples-to-apples) is on average 35-40% faster than overclocked GTX 580.
    (3) HD 7970 with overclock @ raised voltage is on average 50-55% faster than overclocked GTX 580.

    And that doesn't even take into account the results at Eyefinity resolutions or upcoming driver tweaks.

    With respect to the topic at hand, that basically means any Kepler card made to compete with Tahiti had better be >50% faster than the GTX 580, or AMD can easily strike back with an HD 7980 (a.k.a. "Tahiti unchained")... or something along those lines.
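
    Rough arithmetic behind that conclusion (the ~10% headroom of an overclocked GTX 580 over stock is my assumption, not a HardOCP figure):

    Code:
    # Chain HardOCP's relative figures back to a stock GTX 580 baseline.
    oc_580 = 1.10   # assumed: an overclocked GTX 580 sits ~10% above stock
    for label, vs_oc_580 in [("stock 7970", 1.175),      # midpoint of 15-20%
                             ("CCC overclock", 1.375),    # midpoint of 35-40%
                             ("OC + voltage", 1.525)]:    # midpoint of 50-55%
        print(f"{label}: {(vs_oc_580 * oc_580 - 1) * 100:.0f}% over stock GTX 580")
    # -> roughly 29%, 51% and 68% over a stock GTX 580

    Which is why >50% over a stock GTX 580 looks like the floor, not the ceiling, for a Tahiti competitor.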

  6. #676
    Senior Member
    Join Date
    Feb 2010
    Location
    Planet Earth, Solar System, Milky Way
    Posts
    3,525

    Wrong thread, oops, sorry.

  7. #677
    640k who needs more?
    Join Date
    Mar 2010
    Posts
    762
    It would be strange if Kepler weren't a lift in performance compared to Fermi. It would be even stranger if Kepler were a 'whole new architecture', so to speak. Some of you might remember that designing Fermi took time and cost something in the vicinity of $1B (according to NV). So 2014's Maxwell would fit the bill of a 'major change' or 'whole new' architecture, but dropping the 'hot clock' is IMO at least plausible. Link

  8. #678
    Quote Originally Posted by poul View Post
    It would be strange if Kepler weren't a lift in performance compared to Fermi. It would be even stranger if Kepler were a 'whole new architecture', so to speak.
    Charlie didn't call Fermi "broken and unfixable" for nothing, did he?

    It's in what, and whether, you can build something on this architecture that this claim is going to be fully tested.

  9. #679
    640k who needs more?
    Join Date
    Mar 2010
    Posts
    762
    Quote Originally Posted by DarthShader View Post
    Charlie didn't call Fermi "broken and unfixable" for nothing, did he?

    It's in what, and whether, you can build something on this architecture that this claim is going to be fully tested.
    Was VLIW4 a 'whole new' architecture or an overhauled one? A 'whole new' architecture takes time and costs money, maybe in the vicinity of $1B (according to NVIDIA). So it's not done every year, but every 4 to 5 years. The next new architecture for NVIDIA is 2014's Maxwell. They will simply have to live with this one until then.

  10. #680
    Quote Originally Posted by DarthShader View Post
    Charlie didn't call Fermi "broken and unfixable" for nothing, did he?
    And just about everybody and their mom tried to show him up by denouncing the statement: first with the fact that the GTX 480 got launched (never mind that it missed a cluster and its clock targets), then the 'fixed' GTX 580 got launched, which fixed at least that part, and of course the sales of the GTX 460/560 were happily cited too. Nobody ever actually addressed the architecture itself, though, and nVidia believed in the architecture, enough even to do a base-layer respin to get the GTX 580.

    Dropping the hot clocks by itself isn't a major overhaul, let alone a whole new architecture. Time will tell what Kepler will be, though.
    Gigabyte GA-A75M-D2H||AMD A8-3850||Corsair XMS3 PC3-16000(2000MHz)||Sapphire Radeon HD6670||Fractal Design Define Mini
    Life, it all of a sudden comes back to you and you have no clue who it is.
