25 Comments
ltcommanderdata - Monday, August 6, 2012 - link
I guess there's no way around it given the variety of architectures, but referring to GPUs by their core counts doesn't seem very informative anymore. At least when talking about a 12-core Tegra 3 vs a quad-core SGX543MP4, we know they are different product families, and we can see the core-to-performance relation when comparing within families, such as 8-core Tegra 2 vs 12-core Tegra 3, or dual-core SGX543MP2 vs quad-core SGX543MP4. In this case, 8 cores for the T628 and T678 mean very different things even though they are in the same family, but I have little doubt smartphones using the T628 will be promoting its 8-core moniker as much as smartphones using the T678.

augiem - Monday, August 6, 2012 - link
You are right, but marketers don't even understand what they're selling 9 times out of 10, and they're certainly not in the business of trying to educate or inform the public. Whatever sells, sells.

lilmoe - Tuesday, August 7, 2012 - link
I don't believe this is a case of marketing... Their customers are OEMs (Samsung and others), not end consumers.

augiem - Wednesday, August 8, 2012 - link
If you're referring to the naming of the chips themselves, then that's true; the end customer will probably never see those names. But ultimately the consumer will be buying the end product phone/tablet which, as the OP mentioned, will probably be marketed based on its core count. Marketers normally go for the more/bigger-is-better strategy when selling products, as they do with MHz, RAM, etc.

dagamer34 - Friday, August 10, 2012 - link
I think if a company like Samsung cares about stuff like that, they rely on marketing to tech enthusiasts, who are very influential with average buyers, such that by the time buyers enter the store, someone has already told them "Buy the Galaxy S III, ignore the iPhone, One X or whatever else the sales rep tries to peddle you".

That's the only thing I can think of, because in terms of hardware there isn't much difference in the main components of the One X and the GSIII, yet the latter is selling far better. I doubt it's because of the 2GB of RAM; it's more that people are being steered to the phone.
lmcd - Friday, December 7, 2012 - link
Actually, this is more like the PS2 vs. DC (if the DC had a DVD player... sigh). The PS2 sold well because of the prior generation, like Samsung. Samsung *won* with the GSII. Their processor advantage sold that phone to tech enthusiasts, and while the SIII doesn't have that advantage, the reputation has stayed.

Ironically, Android fanboys often claim that the SIII is *faster* than the iPhone, and are more misguided than the entirely uninformed consumer. Meanwhile, the One X doesn't have that reputation, partially because the A8-derived cores in the S3-based Sensation series, the Rezound, etc. led to poor performance next to the GSII and even the Moto phones with TI processors, which slaughtered HTC's rep. At least they didn't use Tegra 2 that generation...
lilmoe - Tuesday, August 7, 2012 - link
The t628 and t678 are identical in core count, but they differ by "doubling the number of arithmetic pipelines within each core and improving the compiler and pipeline efficiency", and by "increasing the efficiency of each core" on the same silicon with the same power draw. Previously, the t604 was only scalable up to 4 cores, and the only 8-core solution was the t658, which is a different IP. With the t628, you are using the same IP (license) as the t624, but you may double the cores. A "cheaper" 8-core GPU, if you will, with almost identical performance to the t658. At least that's how I understand it.

What puzzles me, though, is that since their respective "better equivalents" are available, are the t604 and the t658 "dead" before ever shipping in products? If you go to ARM's website, they're promoting the t624 and t678 GPUs on the t604 and t658's pages:
http://www.arm.com/products/multimedia/mali-graphi...
http://www.arm.com/products/multimedia/mali-graphi...
They're urging their customers to "think twice" before buying the t604/t658 architectures... hmmm. If anyone has an explanation, please do share.
As for the performance differences, things are a BIT clearer: the Mali t604 (the 4 stands for quad-core) is 5x faster than the Mali 400 found in the Galaxy S2. Which means:
- t601 (single-core unified shader): 4x "slower" than the t604/t624, but 15-30% faster than the Mali 400 GPU in pixel shader and compute performance, and 2-4x faster in vertex shader performance (since it's on their Midgard architecture)
- t604/t624 (quad-core unified shader): 5x faster than the Mali 400 in pixel shader and compute performance, and probably 7-10x faster in vertex shader performance
- t628 (8 cores): 2x faster than the Mali t604
- t658 (8 cores): 2x faster than the Mali t604 (pixel AND vertex shader performance), and probably 4x faster in compute performance (great for augmented reality and compute-hungry applications)
- t678 (8 cores): ARM's current flagship GPU, 50% faster than the Mali t658 because of improved efficiency. Also, preferred over the t658?
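Putting those multipliers in one place, here's a quick back-of-the-envelope tally using only the numbers quoted above (midpoints where a range was given), normalized to Mali-400 = 1. These are marketing multipliers, not measurements:

```c
#include <stdio.h>

/* Relative pixel/compute throughput implied by the figures above,
 * normalized to Mali-400 = 1.0. Illustrative only. */
int main(void) {
    const struct { const char *gpu; double rel; } t[] = {
        { "Mali-400",          1.00 },
        { "T601 (1 core)",     1.22 },  /* "15-30% faster" -> ~22% midpoint */
        { "T604/T624 (4 c.)",  5.00 },  /* "5x faster" than Mali-400 */
        { "T628 (8 c.)",      10.00 },  /* "2x the t604" */
        { "T658 (8 c.)",      10.00 },  /* "2x the t604" */
        { "T678 (8 c.)",      15.00 },  /* "50% faster than the t658" */
    };
    for (int i = 0; i < 6; i++)
        printf("%-18s %6.2fx\n", t[i].gpu, t[i].rel);
    return 0;
}
```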
Hope this helps...
lilmoe - Tuesday, August 7, 2012 - link
I should have read the article more thoroughly before commenting... doh. Anand explains most of what I said about the difference in graphics vs. compute performance between the t628 and t678.

But like he's wondering, the question remains: what of the first-generation t600 series?
The only thing that crossed my mind is this, though:
Samsung is the only OEM fabbing Mali GPUs in any real volume ATM. Even ST-Ericsson is going with PowerVR in its next lineup of NovaThor chips. Qualcomm's Adreno 320 hasn't made the competitive "push" to force Samsung to go with the t600 series just yet, and current Android apps aren't even taking full advantage of the Mali 400 GPU.
If ARM wants to truly compete, they need 2 things:
1) Better conformity with architectural standards (Khronos).
2) EXCELLENT drivers for DirectX.
Android's ecosystem isn't cutting it as of yet. Google needs to step up and push Renderscript, and create something similar for graphics. Samsung's next Exynos SoC probably won't utilize any of the t600-series GPUs, but rather the 8-core Mali-450 (for marketing and cost purposes).
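For anyone who hasn't seen it, this is roughly what a Renderscript compute kernel looks like — a minimal root()-style sketch (the file and package names are made up for illustration):

```c
// invert.rs -- a hypothetical minimal Renderscript kernel that inverts
// an image's color channels; run over an Allocation with forEach.
#pragma version(1)
#pragma rs java_package_name(com.example.filters)

void root(const uchar4 *in, uchar4 *out) {
    float4 p = rsUnpackColor8888(*in);  // uchar4 -> float4 in [0,1]
    p.r = 1.0f - p.r;                   // invert each color channel,
    p.g = 1.0f - p.g;                   // leaving alpha untouched
    p.b = 1.0f - p.b;
    *out = rsPackColorTo8888(p);        // float4 -> uchar4
}
```

The point is that the same kernel runs on whatever processor the device's Renderscript driver targets — today that's mostly the CPU, which is exactly the ecosystem gap I'm complaining about.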
The only platform left that might truly utilize the t600 series is Windows' DirectX. Even if it were pushed on Android, I doubt it would get great dev support. ARM needs to strike a deal with Microsoft.
dagamer34 - Friday, August 10, 2012 - link
You do know the Exynos 5250 uses the ARM Mali T604, right?

Lucian Armasu - Friday, August 10, 2012 - link
Exynos 5 Dual, with a dual-core 1.7 GHz Cortex A15 CPU and a Mali T604 GPU, is coming out this year (in 2-3 months, I think) and will compete with the S4 Pro/Adreno 320. I don't know where you got the idea that Samsung didn't want to compete with the Adreno 320.

Also, ST-Ericsson chose PowerVR 600 because they thought the next-gen Mali "won't be ready on time", whatever that means, because obviously the T604 will be ready many months ahead of the PowerVR600 chip inside the iPad 4.
Also, from what I hear, ST-Ericsson has some pretty huge delays with that chip of theirs, and it probably won't be available until the second half of 2013, which makes their decision not to go with the Mali T600 architecture even more puzzling.

By then they'll have to compete with the quad-core Cortex A15-based Tegra 4 and Exynos chips, carrying the Mali T628/678 and whatever Nvidia has in terms of GPU (the 64-core "Kepler-based" GPU).
lilmoe - Saturday, August 11, 2012 - link
I understand all that... Samsung is releasing the Exynos 5 Dual for the sole purpose of supporting a higher resolution on a tablet (higher than FHD). It's becoming a race over who has the fastest GPU when it comes to Android, while NONE of the applications are even scratching the surface when it comes to fully taking advantage of it. Even Asus's Full HD tablets are running on Tegra 3.

None of the OEMs are announcing any Android tablets with the Adreno 320. Those who are announcing them have Windows RT in mind. Which only means one thing: they can really be utilized under Windows, not Android.
It's a complete waste. The t600 series needs really good dev support to really shine, and devices running the t604 will be an extremely limited slice of Android as a whole...

That's one part of the rant. The other part revolves around the release date of the second-generation T600 series, which makes the new Exynos' GPU obsolete (by ARM's standards) before it's even released. What's even more frustrating is that the "second generation" is nothing but a refinement of the first, with added support for ASTC...
darckhart - Monday, August 6, 2012 - link
http://terminator.wikia.com/wiki/Series_600

whooleo - Monday, August 6, 2012 - link
I'm still waiting for this "console class performance" they keep talking about. The Xbox 360, when optimised very well, has roughly the performance of an 8800 GT, or a 9600 GT if you want to be conservative. These mobile GPUs aren't near that level of performance yet. Marketing: all about stretching the truth.

augiem - Monday, August 6, 2012 - link
I saw some commentary a while back saying these GPUs would probably reach X360 levels some time in 2014. I'd guess a bit longer than that, but it's to be expected: they're very low power, physically tiny parts. Yeah, the hype is just hype, as always. Mobile will never equal desktop performance; it just makes sense. As it stands, if we reach X360 performance in 2014, that will be about a 10-year gap. The gap may close a bit with the huge money going into mobile development, but I doubt you'll ever see less than a 5-year gap between desktop and mobile parts.

augiem - Monday, August 6, 2012 - link
Oops, the X360 came out in 2005, so a 9-year gap, not 10.

whooleo - Monday, August 6, 2012 - link
I agree, but I just wish we could have a solid comparison between desktop and mobile parts.

Lucian Armasu - Friday, August 10, 2012 - link
The gap is more like 7 years. The "console class" performance should come next year with the arrival of the Mali T628 and T658, which should have close to 300 gigaflops, more than the Xbox 360 and PS3. Also, these new chips support OpenGL ES 3.0, while the PS3 only supports OpenGL ES 2.0 (slightly modified), so the games should look more visually impressive too, if the devs actually make them specifically for those chips, like they do with Tegra-optimized games.
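For scale, peak shader throughput is roughly ALU lanes × FLOPs per lane per clock × clock speed. Here's a quick sketch using the commonly quoted Xenos figures; the Mali number is just the ~300 GFLOPS claim above, not a measurement:

```c
#include <stdio.h>

/* Peak FLOPS ~= ALU lanes x FLOPs per lane per clock x clock (GHz).
 * Xenos: 48 shaders x 5 lanes (vec4+scalar) x 2 (MADD) at 500 MHz. */
int main(void) {
    double xenos_gflops = 48 * 5 * 2 * 0.5;  /* = 240 GFLOPS, the usual figure */
    double mali_claim   = 300.0;             /* the ~300 GFLOPS claim cited above */
    printf("Xenos peak:        %.0f GFLOPS\n", xenos_gflops);
    printf("Claimed next Mali: %.0f GFLOPS (%.2fx Xenos)\n",
           mali_claim, mali_claim / xenos_gflops);
    return 0;
}
```

Of course, peak FLOPS says nothing about bandwidth, drivers, or thermals, which is where "console class" claims usually fall apart.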
dishayu - Tuesday, August 7, 2012 - link

LOL!! No!! Where do you get that information from? An Xbox 360 has roughly half the power of an 8800GT, because it's pretty much an ATI X1800, which fell short of the 7800GTX, which in turn is not much more than half as powerful as an 8800GT.

And when talking console class, why do you want to go with a higher-end console like the Xbox 360? Take the Nintendo Wii: it has roughly half the power of an Xbox 360, and phones are slowly getting to that level. Of course the marketing guys are exaggerating to sell their products, but they aren't off by as much as you believe.
whooleo - Tuesday, August 7, 2012 - link
As I said, when optimised well (Crysis 2, Gears of War 3, FF XIII-2, etc.). An ATI X1800, you say? I don't think so: Xenos has 48 unified shaders, 8 ROPs, 16 TUs, and a 10MB eDRAM chip that, when utilised well, gives the Xbox 360 a big advantage over the X1800 or 8600 GT. Also take a look at the minimum requirements for a lot of multi-platform games out there; Crysis 2, for example, needs an 8800 GT at least for low settings, and from what I remember one of the devs said that the lowest settings are about console quality. The 8800 GT could most likely run at higher resolutions, which is why I say the 9600 GT is probably a better comparison.

steller2k - Tuesday, August 7, 2012 - link
8800GT?! I was running an 8600 GT in my main gaming PC until 6 months ago, and it always outperformed my Xbox 360 in both resolution and detail. I'm not dogging the 360; it had great graphics when it debuted, but it was nowhere near the 8800 GT level.

steller2k - Tuesday, August 7, 2012 - link
Oops, I need to correct myself. The 8600 is still in my "office" computer. The gaming rig had a GTX 260 (which died). The 8600 GT is much more comparable to the 360, but I would still say it consistently puts up higher resolution and more detail than the 360 in the same game (e.g. Gears of War, Oblivion).

vasanthakumar - Wednesday, August 15, 2012 - link
Hmmm. Console-class performance... great expectations. I hope the industry sets great goals; it'll help consumers. Console GPUs use dedicated memory (GDDR5), whose throughput is beastly; the GPU and GDDR5 communicate with each other directly and process faster. Moreover, a console has 600-pound-gorilla processing power (the Cell processor from IBM). So it may not be possible. (The same analogy applies to CMOS sensors in cameras.)
Mobile SoCs use main memory (LPDDR2, LPDDR3) for their processing. I assume Apple's A5X gets its wide memory bandwidth by using multiple channels. It may become the industry norm.

I think mobile GPUs use dedicated ASICs rather than a general-purpose processor. That should reduce power, but through fixed-function engines. Somebody please confirm...
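On the memory point, peak bandwidth is just effective transfer rate times bus width, which makes the desktop-vs-mobile gap easy to see. A sketch with illustrative figures (not the specs of any particular device):

```c
#include <stdio.h>

/* Peak bandwidth (GB/s) = transfer rate (MT/s) x bus width (bits) / 8 / 1000.
 * The example figures below are illustrative, not tied to a real product. */
static double peak_gbs(double mtps, int bus_bits) {
    return mtps * bus_bits / 8.0 / 1000.0;
}

int main(void) {
    printf("GDDR5, 3200 MT/s, 128-bit bus:       %5.1f GB/s\n", peak_gbs(3200, 128));
    printf("LPDDR2, 800 MT/s, 2x32-bit channels: %5.1f GB/s\n", peak_gbs(800, 64));
    return 0;
}
```

A wide multi-channel layout like the one the A5X reportedly uses narrows that gap, but at a real cost in package size and power.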
softdrinkviking - Tuesday, August 7, 2012 - link
Anand, you say in the article that the GPU compute/graphics-level chips will be targeted at high-end smartphones.

I was under the impression that GPU compute was most useful in workstations. What kind of smartphone software is going to see benefits from GPU compute, as opposed to optimizing for graphics alone?
Sorry if this seems like a basic question.
dagamer34 - Friday, August 10, 2012 - link
I think photo and video manipulation could be accelerated with GPU compute even faster than with multiple CPU cores, because of how parallel-friendly the work is. And the best part is that if an OS vendor accelerates core APIs with OpenCL, then every application should get a speed boost for free.
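To make "parallel-friendly" concrete, here's a hypothetical OpenCL C kernel for a simple per-pixel brightness adjustment; every pixel is independent, so the GPU can chew through them all at once (the kernel name and gain parameter are made up for illustration):

```c
/* Scales every RGBA pixel by `gain`, saturating back to 0..255.
 * One work-item per pixel; launch with global size = pixel count. */
__kernel void brighten(__global const uchar4 *src,
                       __global uchar4 *dst,
                       const float gain)
{
    size_t i = get_global_id(0);              /* this work-item's pixel */
    float4 p = convert_float4(src[i]) * gain; /* widen to float and scale */
    dst[i] = convert_uchar4_sat(p);           /* clamp back to bytes */
}
```

The host side queues that over a few million pixels and the GPU schedules them across all its ALUs — exactly the kind of work a phone's GPU is otherwise leaving on the table.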
Lucian Armasu - Friday, August 10, 2012 - link

I'm surprised by the fact that you're surprised that these new chips are "announced" before the 1st gen launches.

This sort of thing is common for ARM. Remember when TI announced the dual-core Cortex A15 OMAP 5 in 2010? That was even before Nvidia announced the dual-core Cortex A9 Tegra 2, if I'm not mistaken. Nvidia also used to "demo" its Tegra chips a whole year before they came to market. Didn't they unveil Tegra 3 at MWC in mid-February 2011, just 2 weeks before the Tegra 2 Xoom came out?
So yeah, I'm not sure why this surprises you so much. If manufacturers wanted the Mali T604 they would have licensed it long ago, as it should already be ready for shipping, and Samsung has already done that. This is about getting manufacturers ready to license the next-gen GPUs, because, as I'm sure you know, it takes quite a while to go from licensing a design to shipping a product.