22 Comments
blaktron - Monday, January 5, 2015 - link
From what I gather, they are claiming WAY higher gains than Haswell-Broadwell. Aren't they saying there will be 16 EUs, up from 4 EUs in Bay Trail? Of a higher Gen part? Isn't that effectively a 4x performance claim, without some funny business going on?
http://arstechnica.com/gadgets/2015/01/intel-begin...
Thorburn - Monday, January 5, 2015 - link
Increasing unit count doesn't necessarily mean a corresponding increase in performance - you're always bound by the available power budget, as well as other factors such as internal cache structures, etc.

Take HD 4400 vs. HD 5000 in Haswell - both share a 15W TDP, but HD 5000 has double the number of EUs and other units, yet typically offers lower performance than HD 4400 because their usage is limited by the power budget.
Promisingly though, if you compare Atom and Pentium Bay Trail products, the lack of GPU resources means the higher-TDP Pentium parts rarely use much more power than the Atom parts and only offer a very small increase in performance, so perhaps Cherry Trail/Braswell will have a bit more scaling in the higher-TDP parts...
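A minimal sketch of the effect described above - under a fixed power budget, doubling the EU count forces a lower sustained clock, so the wider GPU barely pulls ahead. All of the numbers (EU counts, clocks, per-EU power) are hypothetical, not measured Haswell figures:

```python
# Toy model of a TDP-limited GPU: performance ~ EUs * sustained clock,
# with the clock capped so total GPU power stays inside the budget.
# All numbers are hypothetical, not measured Haswell figures.

def sustained_perf(eus, max_clock_ghz, gpu_power_budget_w, watts_per_eu_ghz):
    # Highest clock (GHz) the power budget allows for this many EUs.
    power_limited_clock = gpu_power_budget_w / (eus * watts_per_eu_ghz)
    clock = min(max_clock_ghz, power_limited_clock)
    return eus * clock  # arbitrary "EU * GHz" performance units

# Two hypothetical GPUs sharing the same ~12 W slice of a 15 W package TDP.
narrow = sustained_perf(eus=20, max_clock_ghz=1.1, gpu_power_budget_w=12, watts_per_eu_ghz=0.5)
wide   = sustained_perf(eus=40, max_clock_ghz=1.1, gpu_power_budget_w=12, watts_per_eu_ghz=0.5)

print(f"wide (40 EU) vs narrow (20 EU): {wide / narrow:.2f}x")  # ~1.09x, not 2x
```

With these made-up numbers the 40-EU configuration ends up only about 9% faster than the 20-EU one, despite having twice the hardware.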
nwarawa - Monday, January 5, 2015 - link
Interesting thought. Bay Trail already uses the vast majority of its TDP for its GPU. 7.5W for that much more performance, even with the die shrink, doesn't seem enough.
Samus - Tuesday, January 6, 2015 - link
That's true. Odds are the EU count is increased and the frequency is decreased, especially on a new process node in an ultra-mobile part. I doubt the GPU is much faster than Bay Trail.
StevoLincolnite - Tuesday, January 6, 2015 - link
It's a balancing act. Ultimately it comes down to performance per watt; one method may be better than the other depending on the particular process node. (They all have differing power characteristics.)
For instance, driving millions more transistors for more EUs at a lower clock may result in less power consumption than fewer EUs at a higher clock, whilst on a prior node it could have been the reverse.
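As a rough illustration of that trade-off: dynamic power scales roughly with capacitance × voltage² × frequency, and a lower clock usually permits a lower voltage. The capacitance, voltage and frequency figures below are invented for the comparison, not tied to any real part:

```python
# Toy dynamic-power comparison, P ~ C * V^2 * f. Capacitance, voltage and
# frequency values are invented for illustration only.

def dynamic_power(rel_capacitance, voltage_v, freq_ghz):
    return rel_capacitance * voltage_v ** 2 * freq_ghz

# "Narrow and fast": baseline EU count at 1.0 GHz, needing ~1.0 V.
narrow_fast = dynamic_power(rel_capacitance=1.0, voltage_v=1.00, freq_ghz=1.0)

# "Wide and slow": twice the EUs (roughly twice the switching capacitance)
# at half the clock, which on many nodes allows a lower voltage (~0.8 V assumed).
wide_slow = dynamic_power(rel_capacitance=2.0, voltage_v=0.80, freq_ghz=0.5)

print(f"narrow/fast: {narrow_fast:.2f} (arbitrary power units)")
print(f"wide/slow:   {wide_slow:.2f} (same nominal EU*GHz throughput)")
```

Under these assumptions the wide/slow configuration delivers the same nominal EU·GHz throughput for about two-thirds of the dynamic power - but as the comment notes, leakage and minimum-voltage limits on a given node can tilt the balance the other way.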
Thorburn - Tuesday, January 6, 2015 - link
HD 5300 at 4.5W is slower than HD 4200 at 10.5W, so yes I wouldn't get my hopes too high.
aratosm - Tuesday, January 6, 2015 - link
MacBook Airs have better graphics performance with HD 5000 than HD 4400 ultrabooks. Wish more ultrabooks came with HD 5000. But you're right about the power envelope. HD 5000 in 15W parts throttles sooner than HD 5100 in 22W. My biggest question is, does the eDRAM use a lot of power? I don't understand why they decided to only offer eDRAM in the high-power chips. The only downside with graphics in a HD 5100 is the lack of bandwidth, which the eDRAM addressed. Hopefully by Skylake they will offer eDRAM at a lower power envelope.
Thorburn - Monday, January 12, 2015 - link
"Macbook airs have better graphics performance with HD5000 then HD4400 ultrabooks. Wish more ultrabooks came with HD5000."Actually from personal testing experience, it doesn't. HD 4400 will outperform HD 5000 in the vast majority of workloads. Iris 5100 is not wholly bandwidth limited either - there is still a degree of TDP limitation even at 28W.
nwarawa - Monday, January 5, 2015 - link
I noticed the same thing in my comment on the Broadwell story. This could be the first time an Atom has a better GPU than its mainstream contemporary. I'm also curious to see how the CPU improvements pan out and any clock increases, since the cheapest Broadwell Celeron is staying at 1.5GHz.
MrSpadge - Monday, January 5, 2015 - link
*Trail GPUs typically run at significantly lower clock speeds than in desktop CPUs, and I guess also lower than in entry-level mobile CPUs. Sure, the fastest mobile CPUs with many GPU shaders run into power limits, but the smaller ones should clock happily.
III-V - Wednesday, February 4, 2015 - link
They'll be running those at lower clocks, most likely. It is typically more efficient from a power standpoint, for GPUs, to have more units clocked at a lower speed rather than fewer units clocked at a higher speed. The tradeoff is cost.

Still, the performance increase should be substantial -- perhaps 275-300% of Bay Trail.
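A back-of-the-envelope version of that estimate, ignoring per-EU architectural changes, TDP limits and memory bandwidth; the clock speeds here are illustrative guesses, not confirmed specs:

```python
# Naive throughput scaling: EU ratio * clock ratio, ignoring per-EU changes,
# TDP limits and memory bandwidth. Clock speeds are illustrative guesses.

bay_trail_eus, bay_trail_clock_mhz = 4, 667         # Bay Trail-T (approx. burst clock)
cherry_trail_eus, cherry_trail_clock_mhz = 16, 500  # assumed lower clock for a 16-EU part

naive_ratio = (cherry_trail_eus / bay_trail_eus) * (cherry_trail_clock_mhz / bay_trail_clock_mhz)
print(f"naive throughput ratio: {naive_ratio:.1f}x")  # ~3.0x
```

Which lands in the same ~3x ballpark - any TDP or bandwidth limit would pull the real number down from there.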
III-V - Wednesday, February 4, 2015 - link
Actually, Intel themselves stated a "2x" increase, so I was just a bit on the optimistic side, as I usually am. :)

Need to get better at tempering my estimations.
jb14 - Monday, January 5, 2015 - link
Any news on Braswell and possible release dates? I liked the Bay Trail implementation in the Lenovo 2 11 and Dell 11 3000 convertibles, so I'm hoping for Braswell refreshes instead of Core M to keep the costs and heat down.
Thorburn - Tuesday, January 6, 2015 - link
Core M shares the same TDP as the Atom parts now.
azazel1024 - Monday, January 5, 2015 - link
Based on Intel's stated goals for Atom over the next few years, I'd be surprised if there wasn't a huge jump in performance. Look at what Core M Broadwell can manage with barely more power budget for its GPU. Cherry Trail I believe will have two GPU setups, an 8EU and a 16EU arrangement. Or at least that was the word as of a few months ago.

My guess is lower-clocked EUs, but lots more of them. I'd suspect something in the range of a 50-150% improvement over current Bay Trail for the 8EU part and a 100-300% increase for the 16EU part, just depending on how they are clocked.
The GPU in Bay Trail is a little "sad" right now, and Intel is shooting for a HUGE increase in GPU performance in just a ~3-year time frame. So unless they are planning on missing their mark big time, they need at least a doubling of performance every year to get into their ballpark (which I believe was an increase of 10x in GPU performance within 3 years when they said it this past summer).
Looking at the CPU side of things, if the architecture remains 100% stagnant, the top-end Cherry Trail is coming with a modest 200MHz clock speed bump - again, unless Intel has changed that since this past summer, when they revealed as much as they ever have about Cherry Trail. No idea on the CPU, but again, they are talking up big CPU gains over that 3-year time frame, so I'd be surprised if we didn't see a larger IPC bump than Ivy/Haswell/Broadwell, maybe in the 15-25% range? I know that the CPU arch isn't getting a full overhaul, just a process shrink, but unless Goldmont/Willow Trail is a truly massive redesign of the CPU arch, they need to average something like 65% per generation for 3 years (because I think Intel's claim was 5x CPU and 10x GPU in 3 years). Even if you factor in that some of it might be from clock speed gains, you'd still need a moderate IPC gain at a minimum every generation.
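A quick sanity check on the arithmetic in the comment above: if the target really is 5x CPU and 10x GPU over roughly three yearly generations, the required compound improvement per generation is the cube root of the target (the 5x/10x figures are the commenter's recollection, not verified roadmap numbers):

```python
# Required per-generation improvement for "5x CPU and 10x GPU in 3 years",
# assuming one generation per year. The 5x/10x targets are as recalled in the
# comment above, not verified roadmap figures.

cpu_target, gpu_target, generations = 5, 10, 3
cpu_per_gen = cpu_target ** (1 / generations)  # ~1.71x per generation
gpu_per_gen = gpu_target ** (1 / generations)  # ~2.15x per generation

print(f"CPU: ~{(cpu_per_gen - 1) * 100:.0f}% faster each generation")
print(f"GPU: ~{(gpu_per_gen - 1) * 100:.0f}% faster each generation")
```

That works out to roughly 71% per generation on the CPU side and a bit more than a doubling per generation on the GPU side, which matches the "doubling every year" estimate above.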
azazel1024 - Monday, January 5, 2015 - link
Dang it, not what I meant. I meant 1.5-1.75x for the 8EU and 2-3x for the 16EU over the current Bay Trail-T.
andrewaggb - Monday, January 5, 2015 - link
Well, this is certainly more exciting than the Broadwell announcement. I'd love for these chips to not suck. Guess we'll see.

I have a Bay Trail tablet, and for the most part I'm pretty happy with it, but it could use a big GPU boost and a moderate CPU boost. Sounds like this should be both, but we'll see.
mkozakewich - Tuesday, January 6, 2015 - link
I can play Minecraft on my tiny HP Stream 7, so take that as you will. It sounds like Cherry Trail will be far more capable.
zodiacfml - Tuesday, January 6, 2015 - link
Oh, I hope they maintain the TDP; the previous Atom already has good power consumption - more than 6 to 8 hours on an ASUS T100 or similar.
Impulses - Tuesday, January 6, 2015 - link
Now if only MS would make a non-Pro Cherry Trail Surface...
watzupken - Wednesday, January 7, 2015 - link
Looking forward to seeing this in some decent device soon. Honestly, I was quite excited by Bay Trail with its performance leap over the previous generation of Atom. However, the GPU is indeed trailing its CPU performance. Also, one thing I noted is that it is often used in very cheap tablets and given a very sad 1 or 2GB of RAM. Considering that the RAM is shared, I feel it makes sense to increase that to at least 4GB to run a 64-bit OS.
Teknobug - Monday, February 2, 2015 - link
Say g'bye to Raspberry Pi.