
Thread: iPad 3 Will Have Dual-Core A6 Processor Instead of Quad-Core?

  1. #41
    A report somewhere (I forget where) characterizes the iPad 3 SoC as "A5S" or "A5X", suggesting (to me) that this is more or less an A5 with a beefed-up GPU to handle 4x the pixel count.

    This would seem to support dual-core rumours tidily...


    Wonder how it will look next to Krait (with appropriate GPU)?

  2. #42
    Quote Originally Posted by charlie View Post
    Much better? Yes. Much better for the money? No. Much better than the competition? T2 has some arguments going for it, but not many. T3 is a regression in most key areas when the competition is making huge leaps. Game over.
    I'm confused at how you're defining "regression."

    Quote Originally Posted by charlie View Post
    I am not sure that is true. Samsung was quoting .29W/GB best case for their brand new DDR3L.

    http://semiaccurate.com/2012/01/31/s...am-into-a-10u/

    Let's assume they are doing a best-case scenario, and the real number is about .5W/GB, and assume that the T3 controller can support the latest bleeding-edge power reduction capabilities (highly doubtful). .5W is HUGE in this arena; that is somewhere around 1/3 of the CPU's total power budget for the DDR3 chips alone, assuming 1GB on the tablet. The added power on the CPU side is not known, but .25W would not be out of line. Together, it is a horrific number for the same bandwidth you would get from 2x LPDDR2 at a fraction of the power cost.
    Somehow I don't think that one channel of 32-bit DDR3 has the same power consumption characteristics as four 64-bit channels running at a much higher base clock. Plain DDR2 is being used in various Chinese handhelds, for instance; I think a tablet (with a much larger battery) can handle DDR3L, which consumes considerably less. Of course LPDDR2 is the only choice for phones.
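
    As a rough illustration of the figures being argued over, here is a minimal back-of-envelope sketch. The W/GB numbers come from the posts above; the 1GB capacity and the implied ~1.5W CPU budget (so that .5W works out to roughly a third of it) are assumptions, not measured data.

    /* Back-of-envelope DRAM power estimate using the figures quoted above.
     * All inputs are assumptions taken from this thread, not measurements. */
    #include <stdio.h>

    int main(void)
    {
        double w_per_gb_best = 0.29; /* Samsung's quoted best case for DDR3L          */
        double w_per_gb_real = 0.50; /* assumed "realistic" figure from the post      */
        double dram_gb       = 1.0;  /* assumed 1GB of DRAM on the tablet             */
        double cpu_budget_w  = 1.5;  /* hypothetical budget implied by the ~1/3 claim */

        double dram_w = w_per_gb_real * dram_gb;
        printf("DRAM power, realistic case: %.2f W (best case %.2f W)\n",
               dram_w, w_per_gb_best * dram_gb);
        printf("Fraction of assumed %.1f W CPU budget: %.0f%%\n",
               cpu_budget_w, 100.0 * dram_w / cpu_budget_w);
        return 0;
    }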

    Quote Originally Posted by charlie View Post
    Nvidia did a stupid thing here. No, not supporting DDR3, but doing so instead of 2x LPDDR2. Trust me, the other SoC designers are toasting the competition's behavior, once they picked themselves off the floor from laughing. There is no real up side to this for Nvidia, even if other sites are too dumb to realize how badly they are being used when they report the PR spin.
    I'd probably trust you more if you brought some numbers showing which areas they're bandwidth-limited in, as opposed to simply saying how badly they screwed up and how everyone's ridiculing them. For phones, that second LPDDR2 channel is a tangible power cost, and I wouldn't be surprised if at least some vendors opt out of using it. The advantage really matters for tablets, although it can at least help 1080p HDMI-out performance on phones that have it.

    Quote Originally Posted by charlie View Post
    See above. And I meant app loading time, a large part of which is due to memory bandwidth needed for app unpacking.
    Sorry, not buying it. Even a 400MHz 32-bit LPDDR2 module supports 3.2GB/s. I've talked to people who have had no problems saturating the bandwidth on a Tegra 2 eval board (unlike, say, a Pandaboard, where people struggle to get a small fraction of it, but I'll put that down to software problems). You will be bound either by the read speed of pulling in the data (assuming it's not compressed at a ratio of 10x or more) or, probably more importantly, by the decompression code itself, which will top out at a few tens of MB/s.

    Even if you had to fill the entire RAM with the unpacked app it'd still take under a second. That's pure write operations.
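
    To put rough numbers on that argument, the sketch below redoes the arithmetic from the two paragraphs above. The 400MHz 32-bit LPDDR2 channel and the "tens of MB/s" decompressor figure come from the post; the 1GB fill and the 50MB packed app size are illustrative assumptions.

    /* Rough arithmetic behind the app-loading argument above. The LPDDR2
     * figures come from the post; app size and decompression rate are
     * illustrative assumptions only. */
    #include <stdio.h>

    int main(void)
    {
        double clock_hz      = 400e6;  /* 400 MHz LPDDR2                        */
        double transfers_clk = 2.0;    /* double data rate: two transfers/clock */
        double bus_bytes     = 4.0;    /* 32-bit channel                        */
        double peak_bw       = clock_hz * transfers_clk * bus_bytes; /* 3.2 GB/s */

        double ram_bytes     = 1.0e9;  /* fill all of an assumed 1GB of RAM      */
        double decomp_rate   = 30e6;   /* assumed ~tens of MB/s decompressor     */
        double app_packed    = 50e6;   /* hypothetical 50MB packed app           */

        printf("Peak channel bandwidth: %.1f GB/s\n", peak_bw / 1e9);
        printf("Time to write 1 GB at peak: %.2f s\n", ram_bytes / peak_bw);
        printf("Time to decompress a %.0f MB app at %.0f MB/s: %.1f s\n",
               app_packed / 1e6, decomp_rate / 1e6, app_packed / decomp_rate);
        return 0;
    }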

    Quote Originally Posted by charlie View Post
    I haven't seen specific latency numbers, but for real tests, not PR blessed tables, I would be surprised if OMAP lost here due to TI's heritage in AV. That said, do you have any numbers/links?
    Here's one measurement for Tegra 2:

    http://www.realworldtech.com/beta/fo...24417&roomid=2

    And some for other Cortex-A8 SoCs:

    http://www.7-cpu.com/cpu/Cortex-A8.html

    And Cortex-A9 SoCs:

    http://www.7-cpu.com/cpu/Cortex-A9.html

    none's measurement is most likely end to end, while the numbers given above separate cache miss penalty from RAM fetch latency. I've also measured (end to end) latency on OMAP3530 to be around 250ns, and something similar for Freescale i.MX53. The basic impression I've been getting is that TI and Freescale have pretty poor latency while nVidia and Samsung do much better.
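
    For context, end-to-end figures like the ~250ns above are usually taken with a dependent pointer chase over a buffer much larger than the caches. A minimal sketch of that kind of test follows; the buffer size, stride, and iteration count are arbitrary assumptions, and a serious benchmark would also randomize the chain to defeat hardware prefetching.

    /* Minimal dependent pointer chase to estimate end-to-end memory latency.
     * Buffer size, stride, and iteration count are arbitrary; a real benchmark
     * would randomize the chain so the prefetchers can't hide the latency. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ENTRIES (16 * 1024 * 1024 / sizeof(void *)) /* 16 MB, well past L2 */
    #define ITERS   (10 * 1000 * 1000)

    int main(void)
    {
        void **buf = malloc(ENTRIES * sizeof(void *));
        size_t stride = 64 / sizeof(void *);   /* one assumed 64-byte line per hop */
        size_t i;
        for (i = 0; i < ENTRIES; i++)          /* build the chain */
            buf[i] = &buf[(i + stride) % ENTRIES];

        void **p = buf;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < ITERS; i++)
            p = (void **)*p;                   /* each load depends on the last */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("~%.0f ns per dependent load (p=%p)\n", ns / ITERS, (void *)p);
        free(buf);
        return 0;
    }

    Built with something like gcc -O2 (older glibc may want -lrt), it prints an average cost per dependent load, which is roughly what the end-to-end numbers above report.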

    I wouldn't expect a heritage in audio/video to automatically lend to an expertise in memory latency optimization since neither are latency sensitive.

    Here are some bandwidth tests, including for Tegra 2:

    http://groups.google.com/group/panda...530e8195?pli=1

    You can see that people are having a much easier time getting higher numbers on the Tegra 2 vs an OMAP4430. Whether that's due to software problems or not is uncertain, but the consensus is that hardware errata are playing a role (maybe fixed in OMAP4460).
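
    For what it's worth, the tests in that thread are essentially timed streaming copies. A minimal sketch of the same idea is below; the 32MB buffer and repeat count are arbitrary assumptions, and the quality of the memcpy() implementation (NEON, preload, and so on) makes a large difference on these SoCs.

    /* Minimal streaming-copy bandwidth test, similar in spirit to the
     * Pandaboard discussion linked above. Buffer size and repeat count are
     * arbitrary assumptions; memcpy() quality matters a lot on ARM SoCs. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define BUF_BYTES (32 * 1024 * 1024)  /* 32 MB, larger than any on-chip cache */
    #define REPEATS   20

    int main(void)
    {
        char *src = malloc(BUF_BYTES), *dst = malloc(BUF_BYTES);
        memset(src, 1, BUF_BYTES);        /* touch the pages before timing */
        memset(dst, 2, BUF_BYTES);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int r = 0; r < REPEATS; r++)
            memcpy(dst, src, BUF_BYTES);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        /* Each memcpy reads and writes BUF_BYTES, so count both directions. */
        double gbs = 2.0 * BUF_BYTES * REPEATS / sec / 1e9;
        printf("memcpy bandwidth: %.2f GB/s\n", gbs);
        free(src);
        free(dst);
        return 0;
    }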

  3. #43
    The Tegra 3 is quite an enigma. I had the chance to play with both ICS and Win8 on the T3. As far as ICS goes, it's quite responsive one moment and then lags like crazy the next! On Win8, it was incredibly smooth all the way. In fairness, I only got to mess around with Win8 for a few minutes.
    Main Rig: Apple //e with 256K extended RAM, 8-bit 65C02 overclocked @ 4.77 MHz air-cooled. ProDOS, Dual 140K Disk Drives, US Robotics 14.4K MODEM.

  4. #44
    Quote Originally Posted by Skywalker View Post
    The Tegra 3 is quite an enigma. I had the chance to play with both ICS and Win8 on the T3. As far as ICS goes, it's quite responsive one moment and then lags like crazy the next! On Win8, it was incredibly smooth all the way. In fairness, I only got to mess around with Win8 for a few minutes.
    That appears to just be Android rather than anything else. Having owned a Xoom since launch, I know too well that Android will, for no apparent reason, stutter or slow to a crawl from time to time and be perfectly smooth the rest of the time. It's incredibly irritating, but I don't think any ARM CPU can run it as smoothly as either Windows 8 or iOS.

  5. #45
    Quote Originally Posted by Guild View Post
    I have a feeling Kraits will flood the WOA tablet market.
    An unannounced version is the killer WARM part.

    -Charlie

  6. #46
    Quote Originally Posted by Exophase View Post
    I'm confused at how you're defining "regression."
    Bandwidth available per bandwidth consumer. Cost. Power use at load. Reliability. Errata. Availability at promised release date(s). I could go on, and I realize I am not justifying all of these, but I do have reasons, which I can't make public, for saying all of this.

    Quote Originally Posted by Exophase View Post
    Somehow I don't think that one channel of 32-bit DDR3 has the same power consumption characteristics as four 64-bit channels running at a much higher base clock. Plain DDR2 is being used in various Chinese handhelds, for instance; I think a tablet (with a much larger battery) can handle DDR3L, which consumes considerably less. Of course LPDDR2 is the only choice for phones.
    Of course not, that is silly. DRAM idle power is mostly capacity * refresh cost per cell. That basically doesn't change across densities and chip counts. It is modified by speed a bit, and can be dropped a lot by power savings technology. Transmission power is largely dependent on width and speed, but also power savings and lane idle/powerdown can play a huge part too. The active power is also increased with speed and width on both the controller and the DRAMs. The difference between DDR3 and LPDDR2 is massive. The difference between DDR3 and DDR3L is basically voltage related.

    Quote Originally Posted by Exophase View Post
    I'd probably trust you more if you brought some numbers showing which areas they're bandwidth-limited in, as opposed to simply saying how badly they screwed up and how everyone's ridiculing them. For phones, that second LPDDR2 channel is a tangible power cost, and I wouldn't be surprised if at least some vendors opt out of using it. The advantage really matters for tablets, although it can at least help 1080p HDMI-out performance on phones that have it.
    I can't share what the OEMs told me, but trust me when I say that the near complete lack of T3 devices on the market months after launch is due mainly to this factor. Cost is a problem too, but bandwidth lost them SO many design wins it is scary.

    Also, having a second LPDDR2 channel does not mean you have to use it, or that it has to be powered up most of the time. I have heard some talk of fairly exotic power savings that basically hard power down half of the memory when it isn't needed. Most cell phones only implement one channel of LPDDR2, though.

    Quote Originally Posted by Exophase View Post
    Sorry, not buying it. Even a 400MHz 32-bit LPDDR2 module supports 3.2GB/s. I've talked to people who have had no problems saturating the bandwidth on a Tegra 2 eval board (unlike, say, a Pandaboard, where people struggle to get a small fraction of it, but I'll put that down to software problems). You will be bound either by the read speed of pulling in the data (assuming it's not compressed at a ratio of 10x or more) or, probably more importantly, by the decompression code itself, which will top out at a few tens of MB/s.
    OK, fair enough, but I am going by what I am told by the OEMs. If you feel Tegra is great for your uses, go buy a few.

    Quote Originally Posted by Exophase View Post
    Here's one measurement for Tegra 2:

    http://www.realworldtech.com/beta/fo...24417&roomid=2

    And some for other Cortex-A8 SoCs:

    http://www.7-cpu.com/cpu/Cortex-A8.html

    And Cortex-A9 SoCs:

    http://www.7-cpu.com/cpu/Cortex-A9.html

    none's measurement is most likely end to end, while the numbers given above separate cache miss penalty from RAM fetch latency. I've also measured (end to end) latency on OMAP3530 to be around 250ns, and something similar for Freescale i.MX53. The basic impression I've been getting is that TI and Freescale have pretty poor latency while nVidia and Samsung do much better.

    I wouldn't expect a heritage in audio/video to automatically lend to an expertise in memory latency optimization since neither are latency sensitive.

    Here are some bandwidth tests, including for Tegra 2:

    http://groups.google.com/group/panda...530e8195?pli=1

    You can see that people are having a much easier time getting higher numbers on the Tegra 2 vs an OMAP4430. Whether that's due to software problems or not is uncertain, but the consensus is that hardware errata are playing a role (maybe fixed in OMAP4460).
    I don't have time now, but I'll try to read these tonight. Thanks.

    -Charlie
