Not exactly an announcement I was expecting to see, but quite interesting. A bit ballsy, but that's pretty much what they need right now. We'll see how it shapes up in actual products...
These ifs and buts in the industry are so irritating. If it releases on time it will compete well, but according to its release schedule we might be at the end of the Mali G77 cycle and the introduction of the G78, so the performance gains would be almost negligible.
Companies overpromising and under-delivering is the norm. COUGH *INTEL*. Seriously, until the GPU hits the market I'm not interested in speculation; I need results to even be excited about this. Until it is implemented in the real world, the idea remains a big cloud of doubt, because many companies fail to stick to their release schedules.
Intel has real GPU products currently; they are just integrated with Intel CPUs. I definitely wouldn't say Intel is 'behind schedule' with Xe either. If you read between the lines, they are shipping 7nm products in 2021, and one of these products is a high-performance GPU. As a matter of fact, despite what Intel would have you believe, I suspect that they'll keep their 10nm launches to a minimum and jump straight to 7nm. It also would not surprise me if there is a shake-up in Intel's future that causes their fab business to get spun off into its own entity to ensure that stuff like this doesn't happen again. That last part is pure speculation though.
I'd be rather curious to see how an Intel/AMD/Nvidia GPU stacks up against the Imagination one.
Also note that apparently they aren't shipping open source drivers, which right away is going to create issues for them. NVIDIA is one of the few companies that chose this route, and thus far it's caused nothing but problems for both NVIDIA and the community at large.
" they are shipping 7nm products in 2021, " yea right,, ill believe that, when it actually happens.. " causes their fab business to get spun off into it's own entity" i doubt that would happen. 10nm is VERY late, because they tried to do to much at once, or there is another reason, who knows.. but to spin it off??? not likely
I wonder which hurt Intel more: their over-ambitious process engineers or their greedy management. Either way, while they were sputtering and cavitating, the competition blew right by them.
Their once-unassailable lead was outmatched by their own hubris.
Intel's 7nm node is apparently going to be slightly denser than TSMC's 5nm node (N5), *not* their 3nm node (N3). I have read about numbers in the range of ~185 million transistors (MTr) per mm^2 for TSMC's N5 and ~200 MTr/mm^2 for Intel's 7nm node. TSMC's N3, in turn, will be 255+ MTr/mm^2. Besides the considerably higher density, TSMC will switch to GAA-FETs at 3nm, so there can be no direct comparison anyway. In any case Intel will need to deliver 7nm first, and I strongly doubt they will manage to completely master EUV manufacturing of multiple layers by 2021.
By the way, Intel's 10nm is not denser than TSMC's N7 DUV. Intel has developed three 10nm variants, for low, mid and high density. Only the high density (and highest performance) 10nm variant is slightly denser than TSMC's N7 DUV and even that was already outclassed in density by TSMC's N7 EUV (7nm+).
To my knowledge Intel employs the mid density 10nm variant for the mobile Ice Lake parts they just released, so they haven't released and aren't about to release any 10nm SoC, CPU or GPU that is denser than even TSMC's vanilla (DUV) N7. Their sole high density (100+ MTr/mm^2) 10nm parts are probably their newest Agilex FPGAs.
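Putting the density figures from the comments above side by side (all of these are rough public estimates quoted in the thread, not vendor-confirmed numbers):

```python
# Reported logic density estimates (MTr/mm^2), as quoted above.
# Treat these as rough public estimates, not official figures.
density = {
    "Intel 7nm": 200,
    "TSMC N5": 185,
    "TSMC N3": 255,  # quoted as "255+", so a lower bound
    "Intel 10nm (high-density variant)": 100,  # quoted as "100+", lower bound
}

baseline = density["TSMC N5"]
for node, mtr in density.items():
    print(f"{node}: {mtr} MTr/mm^2 ({mtr / baseline:.2f}x of TSMC N5)")
```

By these numbers Intel's 7nm would land about 8% denser than TSMC's N5, while N3 would still be roughly 38% denser again, which is the gap the comment is pointing at.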
Intel desperately needs a new x86 architecture--not just a smaller production process with good yields. Process alone isn't going to help them much, imo.
Intel needs a non-x86 architecture which is better than ARM and RISC-V. Which is of course totally possible, and probably only Intel can deliver all the way (from technical to marketing perspectives) giving them competitive advantage.
Doesn't really matter how it stacks up. They don't do consumer products. I've been asking them why ever since they released the 7 series. They said "contact our partners for those questions"... Yeah, what partner? That's why they went bankrupt: not because of losing Apple, but because of their absence from the consumer market.
Huh? They're just like ARM, except they only do GPUs. ...until they bought MIPS, which turned out badly.
But, I mean, would you say ARM is a failure because they don't do consumer products? I wouldn't.
There are other non-failed companies that only sell IP, such as Tensilica, CEVA, and I'm guessing many others that sell things like PCIe interfaces, DRAM controllers, modems, etc.
ARM doesn't have much competition in the mobile market, so they can easily rely on wholesale licensing. GPUs are completely different: there is huge competition, and relying on a single partner was a deadly mistake.
I bet you this GPU IS the Intel one, and Furian is the Ice Lake one. 64 EUs at 1 TFLOP = Ice Lake at 1 GHz. Furian is half the speed of the A-Series.
Intel's GPU is Xe. This is A: XE, XT, XM. But the base 1/16 configuration is called... XE. Add to that this https://news.synopsys.com/2016-03-31-Intel-Custom-... "certified on Intel 10nm PowerVR GT7200". Who's buying this on Intel's process, and why is it certified on IMGtech's GPU, if it's not their integrated graphics under a different name?
Are they OK? I thought they were going to shut down after Apple dropped them as a vendor. After the Canyon Bridge buyout, do they have enough cash to keep operating?
If the GPU architecture delivers even some (most?) of what they're promising, I could see them being bought for the IP, and the A-Series architecture coming to life in a future SoC brought to you by a different company.
Yeah...I vaguely remember you vlad, you were singing praises for the Soviet Union and claiming that China is the best everywhere and that its government takes care of every aspect of your life from housing to employment. "Chinese hater" Yeah...common attempts of brainwashed drones to discredit me. FYI I am anti-Huawei+anti-Emperor Xi>anti-Party!=anti-China There's a big distinction there, unless you're so ignorant you could oversimplify that as equal.
I only see you poisoning the well with fiction. I quote multiple sources, which is why two other trolls scurried off after astroturfing for Huawei again. https://www.anandtech.com/show/15099/the-huawei-ma... If you're so righteous, man up and face me; lace your arguments with insults, I don't care, but if all you have is nothing but empty insults then that only speaks to your status as a blind drone.
Since when is it completely different, and where is it different? A source would be appreciated. Now, it might be different, but they're strangely forthcoming regarding the current architecture compared to past NPUs; there aren't many points of reference, and Huawei doesn't deserve the benefit of the doubt.
No, on the contrary it's reason that Cambricon has to let them if they do. Note the recent 251 incident in which the Party went all-out for 3 days censoring everything trying to suppress the incident, but ultimately failed and they pulled back, and now Huawei for once shows its true colors to the people.
As mentioned by the poster below, I think they are likely to be bought rather than doing well.
If you look at the High to Mid End Smartphone, they are dominated by Apple, Samsung, Huawei all using their own Silicon or Qualcomm chip. Which leaves the low end with Mediatek.
Now the low-end market only cares about cost, so ARM is actually a better choice due to IP bundle licensing.
So I am not too optimistic, and it is also worth mentioning: for anyone old enough to remember the golden era of GPUs, there was always a new design claiming to be better on paper, and what happened?
Drivers. It is the software: the single biggest unmentioned roadblock to GPU computing. How to efficiently use the hardware is the key. S3, Matrox... and lots of others. Remember those?
If I'm reading correctly, this design puts even more importance on drivers.
That's definitely true, but the Vulkan API actually takes a lot of the driver aspect out of the equation. That's the reason it exists on so many platforms: it doesn't depend on the OS quite so much.
Huawei uses Mali GPUs, right? So, once they drop ARM CPU cores and go with a Chinese RISC-V core, they're gonna need a different GPU supplier. Hence, Imagination.
In fact, had Apple not dropped Imagination, I wonder if Verisilicon wouldn't still be pumping Vivante's IP.
Matrox is still around. Not doing anything on the 3D chip front, but still using its own ASICs for 2D, far as I understand. Using AMD for 3D.
VIA, who bought S3, are still around, but I don't know if its newer CPUs include custom made GPUs.
> there were always a new design that claims to be better on paper, and what happened?
Most of them were also better in practice, though most of them had drawbacks. It was quite an interesting time. APIs were quite a problem then, as was the limited amount of stuff one could do in hardware, but it definitely was an era of growth.
Same as Nvidia, AMD, Apple, ARM and almost all other fabless manufacturers "bring it out". Actual manufacturing is only one part of the story of creating a processor and bringing it to life.
Actually, like *none* of those! Those guys all sell chips.
Imagination sells IP to other companies, which bundle it into their SoCs. However, Imagination still needs to work with fabs to make their design compatible with the process nodes customers are likely to want to use.
Nice article, but need to see products to see it working in practice. So how about that RDNA architecture article? There are a few out there now, working off of AMD's material, but it would be nice to see AT's perspective on how much it resets the competition with Nvidia, and what we could expect Samsung will be able to come up with out of it on mobile.
Andrei Frumusanu was working for them, so maybe he has more info on how this is going to develop.
The only downside for ImgTec is that they are dependent on CPU vendors. So if they cannot sell this design to anyone...
They tried with MIPS, but for whatever reason MIPS lost traction. Most probably they were unable to sell the design.
Please understand that ImgTec is a very, very small company that is in fact fighting with ARM. They are not MediaTek nor Qualcomm. In this market there is a lot of competition.
We have Vivante in the low end, Broadcom has VideoCore, and ARM sells Mali together with its Cortex designs. So how can ImgTec survive?
I see the only option would have been MIPS + PowerVR, or to be taken over by a company like MediaTek. I am still wondering why Intel or MediaTek did not buy them on the cheap.
I left back in November 2017 and avoided coverage till now to steer clear of any conflict of interest. The A-Series is beyond the horizon of the knowledge I had from back then, so it was new to me; I don't have any more info beyond the estimates that I wrote.
Vivante is effectively dead and so is the VideoCore lineup; the best case scenario here is a 50/50 split with Arm. As for the CPU thing, I don't think it's a limitation as long as the GPU does in fact deliver on competitive PPA.
I wouldn't exactly say Unisoc adoption of it would have a small impact (though they are still recovering from Intel's bad influence), nor would I write off the possibility of HiSilicon adoption (more so as they are wary of ARM for US ban compliance, and after all a Chinese-backed fund owns Imagination now). Actually this is going so far right now that the RISC-V Foundation is moving out of the US to neutral ground to ensure that the same fate that struck ARM cannot happen to them.
Uncertainties about ImgTech's claims and promises aside, I am wondering who is supposed to be the customer. Apple is developing their own GPUs now. Samsung is going with AMD's RDNA. Qualcomm has their Adreno GPU. HiSilicon is using ARM's Mali. I fear that this will be a very niche product unless it absolutely dominates all other solutions.
If they remain independent, I think it'll be anyone who wants something better than Mali or Intel that don't have their own GPU or haven't partnered up, so with Intel also designing their own GPU cores I guess the main customer would be Mediatek, and a handful of other even smaller licensees like Broadcom for things like its Raspberry Pi SoC, NXP, STM etc.
When compared to CPU designs, which are becoming increasingly commoditized by ARM's freely licensed and very good SIP cores (with only Nvidia and Apple doing their own custom cores in volume going forward), efficient GPU cores continue to be a highly specialized and sought-after technology. Imagination would also be a layup acquisition for any company besides Apple, AMD, or Qualcomm, so I could see Intel, Samsung or ARM buying them in the future.
Amazon made a big deal about GPU support and AI inference in their Graviton 2 announcement. They might be an unexpected client? (For that to work, however, IMG might have to be more flexible in terms of being willing to scale up/drop functionality to match AMZ's needs. They were apparently unwilling to be that flexible for Apple... But hey, near death experience can sometimes teach...)
I'm sure Amazon is just talking about Nvidia and possibly AMD. Nvidia is officially supporting their software stack on ARM, and AMD's is opensource and could be recompiled for ARM (hey, it works on POWER!).
> I fear that this will be a very niche product unless it absolutely dominates all other solutions.
At least from the description in the article, it seems to dominate Mali. Even next gen Mali. I don't expect Apple or Qualcomm to move. Samsung I think would be flexible. It's impossible to say how well RDNA fits all price points or when it will arrive, so ImgTech could find a place there. And with the A series supposedly much better than Mali in performance per silicon, I don't think that HiSilicon using it is totally out of the question.
No reason HiSilicon can't change their minds if there's a compelling reason. PPA advantages directly translate into cost savings, which is very compelling indeed.
MediaTek are probably going to be the biggest customer, though.
I know that they don’t actually compete, since Apple will never offer its design for licensing, but I still think it’s interesting to compare them.
Let’s take the numbers from the Huawei Mate 30 Pro review and compare making some assumptions.
Andrei says: “The comparison implementation here would be an AXT-16-512 implementation running at slightly lower than nominal clock and voltage (in order to match the performance).”
Let’s assume the AXT-16-512 is underclocked by 10% to get to the same performance as the Exynos 9820 and Snapdragon 855. Let’s also assume that an AXT-32-1024 is exactly double the performance of the AXT-16-512.
So, a nominally clocked AXT-16-512 would have 110% the performance of the Snapdragon 855 and Exynos 9820. Double that, and you get 220% the performance, for the AXT-32-1024.
Looking at the Huawei review, here are the numbers:
GFXBench Aztec Ruins High
Exynos 9820 / Snapdragon 855: ~16 fps → AXT-32-1024: 16 fps + 120% = 35.2 fps; Apple A13: 34 fps
GFXBench Aztec Ruins Normal
Exynos 9820 / Snapdragon 855: ~40 fps → AXT-32-1024: 40 fps + 120% = 88 fps; Apple A13: 91 fps
GFXBench Manhattan 3.1
Exynos 9820 / Snapdragon 855: ~69.5 fps → AXT-32-1024: 69.5 fps + 120% = 153 fps; Apple A13: 123.5 fps
GFXBench T-Rex
Exynos 9820 / Snapdragon 855: ~167 fps → AXT-32-1024: 167 fps + 120% = 367 fps; Apple A13: 329 fps
It seems that at least on performance (with generous assumptions), if the new architecture fulfils all promises, it would be competitive, even slightly better than the Apple A13. The problem is that it won’t compete with the A13, but the A14...
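That arithmetic can be sketched as a quick script. The scale factor rests on the comment's own assumptions, not on measurements: the AXT-16-512 matches the S855/E9820 while underclocked 10%, and the AXT-32-1024 scales performance exactly 2x.

```python
# Projected AXT-32-1024 results from the Mate 30 Pro review numbers above.
# Assumptions: +10% going back to nominal clock, then 2x for the doubled config.
baseline_fps = {  # ~Exynos 9820 / Snapdragon 855
    "Aztec Ruins High": 16.0,
    "Aztec Ruins Normal": 40.0,
    "Manhattan 3.1": 69.5,
    "T-Rex": 167.0,
}
a13_fps = {
    "Aztec Ruins High": 34.0,
    "Aztec Ruins Normal": 91.0,
    "Manhattan 3.1": 123.5,
    "T-Rex": 329.0,
}
scale = 1.10 * 2  # +10% to nominal, then doubled shader configuration

for test, fps in baseline_fps.items():
    projected = fps * scale
    print(f"{test}: projected ~{projected:.1f} fps vs A13 {a13_fps[test]} fps")
```

Any real part would lose some of that 2x to bandwidth and thermal limits, so these are best-case paper numbers.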
How did we get to Apple dominating GPUs too, so fast?
They're totally unafraid to spend as much die space as they need to get their performance scaling. Look at a history of Ax die sizes and you'll see they're all over the place.
Agreed. It's their vertical integration at work - they're the only company prepared to spend that much die area on performance because they're the only company besides Samsung that can guarantee to sell every chip they make in a high-end, high-margin device.
"Apple is the biggest example of what a toxic system capitalism can become. " Clear sign of a hater, vlad. "Huawei is the biggest example of what a toxic system state capitalism-cum-corrupt monarchy can become. " It could direct authorities to jail an individual for 251 days with false testimonies only to be proven innocent and compesated with a recording he kept, and those who lied under oath are never held accountable. Huawei could frame somebody, to be jailed using the state apparatus supported by taxpayers, to be compensated using tax money when proven innocent, without expending a single cent from its pocket or giving so much as single apology when exposed. Yeah that's so much better than Apple.
1. Huawei's HR lead a few employees to lie under oath to start the investigation against him.
2. Authorities had the choice between detaining him and not detaining him; all they had to go on was Huawei's testimonies, and they detained him, siding with Huawei despite circumstantial evidence that the accusations were likely false.
3. He was investigated due to another false accusation from Huawei a few months in for an extension on his jail time.
4. Another employee was jailed under similar circumstances but gave in and wrote a confession under Huawei's promise not to press charges, which Huawei immediately seized and brought to court.
5. He only discovered the reason for his jailing when he met the lawyer appointed by his wife, already months into his effective sentence; only then did he disclose that he had a recording of his discussion with the company regarding compensation (and multiple backups, some of which survived the police search during his arrest), which proved his innocence.
6. Upon prosecutors terminating the investigation on the revelation of the recording and releasing him with compensation from the state, Huawei immediately modified their testimony.
7. There was never an apology nor compensation from Huawei for framing Li, not in an official capacity, not by the employees and the HR who gave the false testimonies, and the individuals who lied under oath were never prosecuted nor even investigated.
8. In the first 2-3 days of the incident there was intense censorship, on a scale probably unimaginable to an outsider, but the Party realized this could not be suppressed, which brings us to where we are now: they turned to attempting to dictate the public discourse with censored reports and obfuscated details, and to encouraging the spread of effectively irrelevant content defending Huawei from ideological and emotional standpoints.
Do not forget there is also the AXT-48-1536 for premium mobile, which should go even faster than the 1024 and therefore easily compete with the future A14.
I think that’s for tablets, and the AXT-64-2048 is just a possible option. If you want to consider the AXT-48-1536, you’d have to compare it to the A12X and whatever comes next.
The A series will be in devices in the second half of 2020, there’s absolutely no way that the B series will be available to customers in 2020. The B series will be available in 2021, when it will compete with the A15.
By customers I meant SoC manufacturers, like MediaTek, and phones using B-series SoCs could be made available before the A15, since Apple usually releases new phones in Q4.
Even if we were to be generous in our assumptions, the soonest we’ll see any device with a B-series would be Q3 2021, so at most a few months before the A15. The competition would still be the A15, not the A14. Even more importantly, there’s almost no smartphone that comes out in Q3, apart from some in August and September, which is exactly the time when the A15 will be out too.
Yeah, but he asked and they denied it. They should've been like "hey... yeah! yeah, that's exactly right!", except the question was probably fielded by an ignorant marketing drone who doesn't know about hexadecimal.
In fact, maybe whoever actually thought of the naming scheme planned it exactly like that, but that reasoning somehow failed to get communicated.
I wish them success and hope to see them get plenty of design wins. I have never actually owned a PowerVR product, but I've been sort of a distant fan (over here in Android land) and look forward to seeing how they perform. If they do well, who knows, they may even reenter the console market with the next-gen portables.
Comparison with desktop parts: 2 TFLOPS single-precision is comparable with the Geforce GTX 960 (2.3 TFLOPS) or GTX 1050 (~2 TFLOPS). A GTX 1080 (vanilla, non-super) does about 11TFLOPS.
64 gigapixels/s (fill rate) is comparable with a Geforce GTX 970 or some GTX 1060 parts. A GTX 1080 (vanilla, non-super) does about 105 gigapixels/s.
These simple metrics do not indicate that Imagination's new GPU has real-world performance similar to these cards, of course, but the raw numbers are impressive for a low-wattage part.
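For quick positioning, the paper-spec ratios implied by the figures quoted above (throughput only, which as noted says little about real-world performance):

```python
# Paper-spec ratios vs the desktop figures quoted in the comment above.
# Throughput numbers only; real performance depends on drivers, memory
# bandwidth, thermals, and so on.
img_axt = {"fp32_tflops": 2.0, "fill_gpix_s": 64.0}
gtx_1080 = {"fp32_tflops": 11.0, "fill_gpix_s": 105.0}

for metric in img_axt:
    ratio = img_axt[metric] / gtx_1080[metric]
    print(f"{metric}: {ratio:.0%} of a GTX 1080")
```

So roughly a fifth of a GTX 1080's shader throughput but over half its fill rate, which fits the mobile-first, fill-rate-heavy design.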
So what you're saying is that this "fastest GPU IP ever created" has theoretical throughput figures lower than those of two-generation-old midrange desktop parts.
Man, it's gonna be exciting when this is released and it's total unmitigated shite, like every mobile GPU ever.
For me a more useful comparison point is the consoles. Xbox One S is 1.4 TFLOPS, PS4 is 1.84 TFLOPS, and, more to the point, Switch supposedly reaches 1 TFLOPS for 16 bit at maximum, but in practice, and for 32 bit, it's around 400 GFLOPS (when docked).
So in theory the AXT-64-2048 could make for quite a decent low power console chip, and a good upgrade venue for Nintendo.
(Sure, Xbox and PS have moved a little forward since then, and will move more next year, but, as an owner of a One S, I still find it quite impressive what can be achieved with this kind of GPU power.)
Nintendo Switch uses the Tegra X1, which was made to be a high-end tablet SoC. So, by extension, it's not surprising that a modern candidate for that application would potentially be a worthy successor for the Switch.
Speaking of set top consoles, you're citing 2013-era models (okay, the One S is more recent, but really a small tweak on the original spec). If you instead look at the PS4 Pro and One X, then you'll see that the set top consoles have moved far beyond this GPU.
Imagination was in trouble for a long time. The reason Apple, and Microsoft before that, left, was because Imagination refused to go along with requests from both companies for custom IP. Apple, for example needed more work on AI and ML. Imagination refused to work on that for them, which was a major mistake, as Apple was half their business, and generating more than half of their profit.
When Apple announced they were developing their own GPU, they said that within two years they would no longer be using any Imagination IP. Imagination confirmed that. The assumption there was that older SoCs that Apple would continue to use for other devices would still incorporate the IP until they had been superseded by newer versions.
It’s believed that newer Apple SoCs contain no Imagination IP.
It’s interesting to see that this new Imagination IP seems to be close to what Apple wanted, but what Imagination refused to give them. A fascinating turnabout. Now it remains to be seen whether this serious improvement upon their older IP is really competitive with the newest IP from others, when it actually is in production, assuming it will really be used.
> When Apple announced they were developing their own GPU, they said that within two years they would no longer be using any Imagination IP. Imagination confirmed that.
The only thing Imagination confirmed is that Apple told them that. Ironically, all those press releases and all official mentions of this have disappeared from both companies, which is essentially a sign that the two companies buried the hatchet and came to some form of agreement.
> It’s believed that newer Apple SoCs contain no Imagination IP.
Well no, we're still here two years later. Apple's GPUs still very much look like PowerVR GPUs with similar block structures, they are still using IMG's proprietary TBDR techniques, and even publicly expose proprietary features such as PVRTC. Saying Apple GPUs contain none of IMG's IP is just incompetent on the topic.
Andrei is as close to correct as one can hope to be.
All we ACTUALLY know is that PowerVR has NEVER sued Apple. They said some pissed-off things in a press release, then mostly took them back in a later press release. And Apple said it would cease to pay *royalties* to Imagination within a period of 15 months to two years.
It is possible that Apple created something de novo that's free and clear of IMG and that's that. It's also possible that Apple paid (and even continues to pay?) IMG large sums of money that are not "royalties" but something else (IP licensing? patent purchase? ...). We have no idea which is the case.
Apple is large enough that the money paid out could be hidden in pretty much any accounting line item. IMG is private, so they have no obligation to tell us where their money comes from.
Very true. If you run GFXBench on the new iPhone 11 Pro with the A13, you will see that the GL drivers are all PowerVR/IMGTec, exactly the same as on the previous A12. So in my mind the A13 still has some form of PowerVR design in it.
To be honest I think Apple still has some PowerVR design in the current chips, even in the A13; otherwise I can't explain why the GL driver strings (PowerVR, IMG texture formats, etc.) appear when I run GFXBench. I think Apple and Imagination still somehow have some form of good relationship behind the scenes...
"ISO-area and process node comparison" Goddamn. So refreshing to see a comparison under such standard conditions yet still showing up in footnotes. This is the industry that I recognize, not Huawei's deceitful comparison during the launch of GPU Turbo with age old design and age old architecture *without* due clarification for months.
> There are very few companies in the world able to claim a history in the graphics market dating back to the “golden age” of the 90’s. Among the handful of survivors of that era are of course NVIDIA and AMD (formerly ATI), but there is also Imagination Technologies.
Well... if you're going to count ATI, who got gobbled up by AMD, then you might also count Qualcomm's Adreno, which came from BitBoys Oy and passed through ATI/AMD.
The idea of a super-wide SIMD seems somewhat at odds with tiled-rendering. Unless you can scale up your tile sizes (which might be how they got away with it), it seems that it'd be difficult to pack your 128-lane SIMD with conditionally-coherent threads, if you're also limiting the parallelism with a spatial coherency constraint.
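A toy model of that concern (my own sketch, nothing from Imagination's documentation): if each lane independently takes one side of a branch with probability p, a SIMD group has to execute both sides unless every lane agrees, and wider groups agree less often.

```python
# Expected number of branch paths a SIMD group executes, assuming each lane
# independently takes side A with probability p. Both sides must run unless
# all lanes agree; wider SIMD makes full agreement exponentially less likely.
def expected_passes(width: int, p: float) -> float:
    all_a = p ** width
    all_b = (1 - p) ** width
    return 2 - all_a - all_b

for width in (32, 64, 128):
    print(f"width {width:3d}: ~{expected_passes(width, 0.1):.4f} passes")
```

Of course this assumes independent lanes; tiling tends to make neighbouring pixels coherent, which softens the penalty in practice, but that is exactly the packing problem the comment raises.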
I believe IMG has also proposed good solutions in the past. Problem was they never got to market, as they were never licensed. We have only seen some low-to-midrange solution in some MediaTek SoCs that never shined, and nobody even bothered. Now the main question still remains: will IMG be able to license high-end solutions to third parties so we can actually get our hands on them? Otherwise it will be another paper show-off and nothing more, I am afraid... 😦
This is not a new problem for chip (or IP) companies. The job of a good sales & marketing team is to engage with potential customers and figure out what specs their product would need to have to potentially win their business.
Of course, whatever the competition & end-user markets do are wildcards you can't control.
Indeed... but we have only seen this in the market a few months ago, and it's not even the Furian version. The only chips I have seen around have the GE8320... come on, they really are low, low, low range. I wished to have seen some 9XT around, but it didn't happen and perhaps never will. Now I look forward to seeing this new A-Series... but my doubts still remain. I hope to be wrong.
Why do you call wider vectors "thread-level parallelism"? Seems the opposite of the meaning of threads as threads must be able to execute different pieces of code.
Nah, not in GPU parlance. Nvidia has long talked about each element of a SIMD as a thread. What CPU folks would call a "thread", Nvidia calls a "warp" and AMD calls a "wave". Not sure what Imagination calls it.
"...with for example each HyperLane having being able to be configured with its own virtual memory space..." Excess words, try: "...with for example each HyperLane having to be configured with its own virtual memory space..."
But they said it! Quick! Everyone to their brokers, STAT!
10nm is a DUV process of the highest-ever density; it's denser than TSMC's 7FF DUV. 7nm, which is the generation of TSMC's N3, is EUV. It's easier.
That's why they are now 4 years behind schedule, right?
mode_13h - Wednesday, December 4, 2019 - link
Exactly. I wonder which hurt Intel more: their over-ambitious process engineers or their greedy management. Either way, while they were sputtering and cavitating, the competition blew right by them.
Their once-unassailable lead was outmatched by their own hubris.
Santoval - Tuesday, December 3, 2019 - link
Intel's 7nm node is apparently going to be slightly denser than TSMC's 5nm node (N5), *not* their 3nm node (N3). I have read numbers in the range of ~185 million transistors (MTr) per mm^2 for TSMC's N5 and ~200 MTr/mm^2 for Intel's 7nm node. TSMC's N3, in turn, will be 255+ MTr/mm^2. Besides a considerably higher density, TSMC will switch to GAA-FETs at 3nm, so there can be no direct comparison anyway. In any case, Intel will need to deliver 7nm first, and I strongly doubt they will manage to completely master EUV manufacturing of multiple layers by 2021.

By the way, Intel's 10nm is not denser than TSMC's N7 DUV. Intel has developed three 10nm variants, for low, mid and high density. Only the high-density (and highest-performance) 10nm variant is slightly denser than TSMC's N7 DUV, and even that was already outclassed in density by TSMC's N7 EUV (7nm+).
To my knowledge Intel employs the mid density 10nm variant for the mobile Ice Lake parts they just released, so they haven't released and aren't about to release any 10nm SoC, CPU or GPU that is denser than even TSMC's vanilla (DUV) N7. Their sole high density (100+ MTr/mm^2) 10nm parts are probably their newest Agilex FPGAs.
Qasar - Tuesday, December 3, 2019 - link
Santoval, sources??
WaltC - Thursday, December 5, 2019 - link
Intel desperately needs a new x86 architecture--not just a smaller production process with good yields. Process alone isn't going to help them much, imo.
peevee - Monday, December 9, 2019 - link
Intel needs a non-x86 architecture which is better than ARM and RISC-V. That is of course totally possible, and probably only Intel can deliver it all the way (from technical to marketing perspectives), giving them a competitive advantage.
Korguz - Tuesday, December 10, 2019 - link
they already tried something like that.. remember ia64 and itanium ??
regsEx - Tuesday, December 3, 2019 - link
Doesn't really matter how it stacks up. They don't do consumer products. I've been asking them why, back when they released the 7 series. They said: contact our partners with those questions... Yeah, what partner? That's why they went bankrupt. Not because of losing Apple, but because of their absence from the consumer market.
mode_13h - Wednesday, December 4, 2019 - link
Huh? They're just like ARM, except they only do GPUs. ...until they bought MIPS, which turned out badly.

But, I mean, would you say ARM is a failure because they don't do consumer products? I wouldn't.
There are other non-failed companies that only sell IP, such as Tensilica, CEVA, and I'm guessing many others that sell things like PCIe interfaces, DRAM controllers, modems, etc.
regsEx - Wednesday, December 4, 2019 - link
ARM doesn't have much competition in the mobile market, so they can easily rely on wholesale. GPUs are completely different. There is huge competition, and relying on a single partner was a deadly mistake.
Fataliity - Saturday, December 28, 2019 - link
I bet you this GPU IS the Intel one, and Furian is the Ice Lake one. 64 EUs at 1 TFLOP = Ice Lake, 1 GHz. Furian is 1/2 the speed of the A-series. Intel's GPU is Xe. This is A - XE, XT, XM.
But the base of 1/16 is called..... XE.
Add to that this
https://news.synopsys.com/2016-03-31-Intel-Custom-...
"certified on Intel 10nm PowerVR GT7200"
Who's buying this on Intel's process, and why is it certified on IMGtech's GPU, if it's not their integrated graphics under a different name?
Kishoreshack - Monday, December 2, 2019 - link
I think Imagination has been releasing different architectures in the past, but the performance gains were nowhere near its competition.
Until it hits the market, it's really hard to get excited.
mode_13h - Wednesday, December 4, 2019 - link
I thought iPhones and iPads were always pretty competitive on graphics benchmarks.
s.yu - Wednesday, December 4, 2019 - link
Oh they are, my iPP 1st gen from 2015 is only now finally starting to show some age in practical graphics loads.
mode_13h - Sunday, December 8, 2019 - link
Thanks for confirming that. I appreciate all of your posts in this thread.
webdoctors - Monday, December 2, 2019 - link
Are they OK? I thought they were gonna shut down after Apple dropped them as a vendor. After the Canyon Bridge buyout, do they have enough cash to keep operating?

Hopefully they can keep the lights on.
MrCommunistGen - Monday, December 2, 2019 - link
If the GPU architecture delivers even some (most?) of what they're promising, I could see them being bought for the IP, and the A-series architecture coming to life in a future SoC brought to you by a different company.
yeeeeman - Tuesday, December 3, 2019 - link
Apple dropped them because they copy-pasted the design, so there was no reason to keep licensing it from them.
Spunjji - Tuesday, December 3, 2019 - link
Yup. They tried to do this with Qualcomm's modems, too, by feeding their data to the Intel engineers. The difference is that Qualcomm had the funds to fight back.
melgross - Tuesday, December 3, 2019 - link
You guys are a joke. Seriously.
Korguz - Tuesday, December 3, 2019 - link
you are worse, melgross
melgross - Tuesday, December 3, 2019 - link
No, you guys are just either completely ignorant, or deliberately hateful.
name99 - Tuesday, December 3, 2019 - link
Why can't it be both? The two do tend to correlate.
Korguz - Tuesday, December 3, 2019 - link
and you're not?? come on, melgross.. some of your posts... have pro-Intel all over them...
Korguz - Tuesday, December 3, 2019 - link
case in point: https://www.anandtech.com/show/15162/dell-intel-cp...
s.yu - Tuesday, December 3, 2019 - link
Ok? Like how HiSilicon siphoned from Cambricon?
levizx - Tuesday, December 3, 2019 - link
How? It's a completely different architecture.
vladx - Wednesday, December 4, 2019 - link
Don't mind s.yu, he's the biggest Chinese hater on AnandTech. You would save yourself time and energy just ignoring his rambling.
s.yu - Wednesday, December 4, 2019 - link
Yeah... I vaguely remember you, vlad; you were singing praises for the Soviet Union and claiming that China is the best everywhere and that its government takes care of every aspect of your life, from housing to employment.

"Chinese hater"
Yeah... common attempts of brainwashed drones to discredit me.
FYI I am anti-Huawei + anti-Emperor Xi > anti-Party != anti-China.
There's a big distinction there, unless you're so ignorant you would oversimplify all that as equal.
vladx - Wednesday, December 4, 2019 - link
Your attempts to "poison the well" are just pathetic, s.yu.
s.yu - Wednesday, December 4, 2019 - link
I only see you poisoning the well with fiction. I quoted multiple sources, which is why two other trolls scurried off after astroturfing for Huawei again.
https://www.anandtech.com/show/15099/the-huawei-ma...
If you're so righteous, man up and face me; lace your arguments with insults, I don't care, but if all you have is empty insults then that only speaks to your status as a blind drone.
s.yu - Wednesday, December 4, 2019 - link
Since when is it completely different? Where is it different? A source would be appreciated. It might be different now, but they're strangely forthcoming regarding the current architecture compared to past NPUs, there aren't many points of reference, and Huawei doesn't deserve the benefit of the doubt.
levizx - Tuesday, December 3, 2019 - link
Also, Cambricon is backed by the Chinese Academy of Sciences - a government agency. You think they don't have any teeth to bare if Huawei stole from them?
s.yu - Wednesday, December 4, 2019 - link
No, on the contrary, that's the reason Cambricon has to let them, if they do.

Note the recent 251 incident, in which the Party went all-out for 3 days censoring everything to try to suppress the incident, but ultimately failed and pulled back, and now Huawei for once shows its true colors to the people.
ksec - Tuesday, December 3, 2019 - link
As mentioned by the poster below, I think they are more likely to be bought than to do well.

If you look at the high-to-mid-end smartphone market, it is dominated by Apple, Samsung and Huawei, all using their own silicon or Qualcomm chips. Which leaves the low end to Mediatek.
Now the low-end market only cares about cost, so ARM is actually a better choice due to IP bundle licensing.
So I am not too optimistic. It is also worth mentioning: for anyone old enough to remember the golden era of GPUs, there was always a new design that claimed to be better on paper, and what happened?
Drivers. It is the software, the single biggest unmentioned roadblock to GPU computing. How to efficiently use the hardware is the key. S3, Matrox... and lots of others, remember those?
If I'm reading correctly, this design places even more importance on drivers.
adriaaaaan - Tuesday, December 3, 2019 - link
That's definitely true, but the Vulkan API actually takes a lot of the driver aspect out of the equation. That's the reason it exists on so many platforms: it doesn't depend on the OS quite so much.
mode_13h - Wednesday, December 4, 2019 - link
Huawei uses Mali GPUs, right? So, once they drop ARM CPU cores and go with a Chinese RISC-V core, they're gonna need a different GPU supplier. Hence, Imagination.

In fact, had Apple not dropped Imagination, I wonder if Verisilicon wouldn't still be pumping Vivante's IP.
ET - Wednesday, December 4, 2019 - link
Matrox is still around. Not doing anything on the 3D chip front, but still using its own ASICs for 2D, as far as I understand, and using AMD for 3D. VIA, who bought S3, are still around, but I don't know if their newer CPUs include custom-made GPUs.

> there was always a new design that claimed to be better on paper, and what happened?

Most of them were also better in practice, though most of them had drawbacks. It was quite an interesting time. APIs were quite a problem then, as was the limited amount of stuff one could do in hardware, but it definitely was an era of growth.
mode_13h - Wednesday, December 4, 2019 - link
Well, if you're going to stretch that far, you might as well also cite Intel.
Threska - Wednesday, December 4, 2019 - link
PowerVR (NEC) in a console as well as a PC part.vladx - Wednesday, December 4, 2019 - link
You just ignored that MediaTek just re-entered the high-end SoC market. Or, most likely, you didn't even bother reading the article.
Everyone keeps saying if they bring it out. How does a fabless GPU-designing firm like Imagination "bring it out"?
shabby - Monday, December 2, 2019 - link
They use their... imagination 😂
Eliadbu - Tuesday, December 3, 2019 - link
Same as Nvidia, AMD, Apple, ARM, and almost all other fabless manufacturers "bring it out". Actual manufacturing is only one part of the story of creating a processor and bringing it to life.
mode_13h - Wednesday, December 4, 2019 - link
Actually, like *none* of those! Those guys all sell chips.

Imagination sells IP to other companies, which bundle it into their SoCs. However, Imagination still needs to work with fabs to make their design compatible with the process nodes customers are likely to want to use.
mode_13h - Wednesday, December 4, 2019 - link
Oops, I see you said ARM. Okay, so that's the one example that's like Imagination - ARM doesn't sell chips, either.rpg1966 - Monday, December 2, 2019 - link
"currently there’s no publicly announced or available chips which make use of the company’s 8XT or 9XTP designs"Does this mean they went through two entire design processes with zero sales? Surely not?
rpg1966 - Monday, December 2, 2019 - link
nvm, that's just the XT series, it seems.
Spunjji - Tuesday, December 3, 2019 - link
Yeah, I think they only really ever sold those larger GPUs to Apple before.
melgross - Tuesday, December 3, 2019 - link
Correct.
ABR - Tuesday, December 3, 2019 - link
Nice article, but we need to see products to see it working in practice. So how about that RDNA architecture article? There are a few out there now, working off of AMD's material, but it would be nice to see AT's perspective on how much it resets the competition with Nvidia, and what we could expect Samsung to come up with from it in mobile.
mode_13h - Wednesday, December 4, 2019 - link
Nobody knows the precise details of the AMD/Samsung collaboration, so it's really hard to speculate on how the result will turn out.
techjam - Tuesday, December 3, 2019 - link
Exciting news! Hopefully this will push Qualcomm to up their game. Not that the Adreno GPUs are bad, but the yearly increases are fairly modest.

One concern though: that naming scheme is awful. Starting with A and moving down to D? Seriously?
RaduR - Tuesday, December 3, 2019 - link
Andrei Frumusanu was working for them, so maybe he has more info on how this is going to develop.

The only downside for ImgTec is that they are dependent on CPU vendors. So if they cannot sell this design to anyone...
They tried with MIPS, but for whatever reason MIPS lost traction. Most probably they were unable to sell the design.
Please understand that ImgTec is a very, very small company that is in fact fighting with ARM. They are not Mediatek nor Qualcomm. In this market there is a lot of competition.
We have Vivante in the low end, Broadcom has VideoCore, and ARM sells Mali together with its Cortex designs. So how can ImgTec survive?
I see the only option would have been MIPS + PowerVR, or to be taken over by a company like Mediatek. I am still wondering why Intel or Mediatek did not buy them on the cheap.
Andrei Frumusanu - Tuesday, December 3, 2019 - link
I left back in November 2017 and avoided coverage till now due to any conflict of interest. The A-Series is beyond the horizon of the future knowledge I had from back then, so it was new to me; I don't have any more info beyond the estimates that I wrote.

Vivante is effectively dead and so is the VideoCore lineup; the best-case scenario here is a 50/50 split with Arm. The CPU thing I don't think is a limitation, as long as the GPU does in fact deliver on competitive PPA.
ZolaIII - Tuesday, December 3, 2019 - link
I wouldn't exactly say Unisoc adoption of it would have a small impact (though they are still recovering from Intel's bad influence), nor would I write off the possibility of HiSilicon adoption (more so as they are wary of ARM because of US-ban compliance and, after all, a Chinese-backed fund owns Imagination now). Actually this is going so far right now that the RISC-V Foundation is moving out of the US to neutral ground, to ensure that the same fate that struck ARM cannot happen to them.
vladx - Wednesday, December 4, 2019 - link
ARM is also planning to move its remaining R&D centers out of the US, to avoid any chance of a US ban in the future.
vladx - Wednesday, December 4, 2019 - link
Did anyone use Vivante designs in their SoCs in the past 3 years or so?
GruenSein - Tuesday, December 3, 2019 - link
Uncertainties about ImgTech's claims and promises aside, I am wondering who is supposed to be the customer.

Apple is developing their own GPUs now.
Samsung is going with AMD's RDNA.
Qualcomm has their Adreno GPU.
HiSilicon is using ARM's Mali.
I fear that this will be a very niche product unless it absolutely dominates all other solutions.
Raqia - Tuesday, December 3, 2019 - link
If they remain independent, I think it'll be anyone who wants something better than Mali or Intel, doesn't have their own GPU, and hasn't partnered up. With Intel also designing their own GPU cores, I guess the main customer would be Mediatek, plus a handful of other even smaller licensees like Broadcom (for things like its Raspberry Pi SoC), NXP, STM, etc.

Compared to CPU designs, which are becoming increasingly commoditized by ARM's freely licensed and very good SIP cores (with only nVidia and Apple doing their own custom cores in volume going forward), efficient GPU cores continue to be a highly specialized and well sought-after technology. Imagination would also be a layup acquisition for any company besides Apple, AMD, or Qualcomm, so I could see Intel, Samsung or ARM buying them in the future.
Raqia - Tuesday, December 3, 2019 - link
Also Marvell and other TV / automakers.
name99 - Tuesday, December 3, 2019 - link
Amazon made a big deal about GPU support and AI inference in their Graviton 2 announcement. They might be an unexpected client?

(For that to work, however, IMG might have to be more flexible in terms of being willing to scale up or drop functionality to match AMZ's needs. They were apparently unwilling to be that flexible for Apple... But hey, a near-death experience can sometimes teach...)
mode_13h - Wednesday, December 4, 2019 - link
I'm sure Amazon is just talking about Nvidia and possibly AMD. Nvidia is officially supporting their software stack on ARM, and AMD's is open source and could be recompiled for ARM (hey, it works on POWER!).
ET - Tuesday, December 3, 2019 - link
> I fear that this will be a very niche product unless it absolutely dominates all other solutions.

At least from the description in the article, it seems to dominate Mali, even next-gen Mali. I don't expect Apple or Qualcomm to move. Samsung, I think, would be flexible: it's impossible to say how well RDNA fits all price points or when it will arrive, so ImgTech could find a place there. And with the A-Series supposedly much better than Mali in performance per silicon, I don't think HiSilicon using it is totally out of the question.
Spunjji - Tuesday, December 3, 2019 - link
No reason HiSilicon can't change their minds if there's a compelling reason. PPA advantages directly translate into cost savings, which is very compelling indeed.

MediaTek are probably going to be the biggest customer, though.
mode_13h - Wednesday, December 4, 2019 - link
mode_13h - Wednesday, December 4, 2019 - link
The Chinese want nothing to do with ARM any more. Got it?

So, anyone who's using Mali, or any Chinese phone maker who's using Qualcomm, is a potential customer.
vladx - Wednesday, December 4, 2019 - link
If you bothered to read the article, you would've found the answer but I guess Americans can't be bothered to read.s.yu - Wednesday, December 4, 2019 - link
"Americans can't be bothered to read"Wow, calling others haters, and look at you.
Etain05 - Tuesday, December 3, 2019 - link
I know that they don't actually compete, since Apple will never offer its design for licensing, but I still think it's interesting to compare them.

Let's take the numbers from the Huawei Mate 30 Pro review and compare, making some assumptions.
Andrei says: “The comparison implementation here would be an AXT-16-512 implementation running at slightly lower than nominal clock and voltage (in order to match the performance).”
Let’s assume the AXT-16-512 is underclocked by 10% to get to the same performance as the Exynos 9820 and Snapdragon 855. Let’s also assume that an AXT-32-1024 is exactly double the performance of the AXT-16-512.
So, a nominally clocked AXT-16-512 would have 110% the performance of the Snapdragon 855 and Exynos 9820. Double that, and you get 220% the performance, for the AXT-32-1024.
Looking at the Huawei review, here are the numbers:
GFXBench Aztec Ruins High
Exynos 9820 and Snapdragon 855: ~16fps -> AXT-32-1024: 16fps + 120% = 35.2fps
Apple A13: 34fps

GFXBench Aztec Ruins Normal
Exynos 9820 and Snapdragon 855: ~40fps -> AXT-32-1024: 40fps + 120% = 88fps
Apple A13: 91fps

GFXBench Manhattan 3.1
Exynos 9820 and Snapdragon 855: ~69.5fps -> AXT-32-1024: 69.5fps + 120% = 153fps
Apple A13: 123.5fps

GFXBench T-Rex
Exynos 9820 and Snapdragon 855: ~167fps -> AXT-32-1024: 167fps + 120% = 367fps
Apple A13: 329fps
It seems that, at least on performance (and with generous assumptions), if the new architecture fulfils all its promises it would be competitive, even slightly better than the Apple A13. The problem is that it won't be competing with the A13, but with the A14...
How did we get to Apple dominating GPUs too, so fast?
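The projection above is easy to reproduce; here is a quick Python sketch. Note that the 110%-of-baseline and perfect-2x-scaling factors are the comment's own assumptions, not measured figures, and the fps numbers are the approximate ones quoted from the Mate 30 Pro review:

```python
# Hypothetical projection per the comment above: a nominally clocked
# AXT-16-512 is assumed to be 110% of an Exynos 9820 / Snapdragon 855,
# and an AXT-32-1024 is assumed to scale perfectly to 2x that (220%).
baseline_fps = {  # ~Exynos 9820 / Snapdragon 855 results quoted above
    "Aztec Ruins High": 16.0,
    "Aztec Ruins Normal": 40.0,
    "Manhattan 3.1": 69.5,
    "T-Rex": 167.0,
}
a13_fps = {
    "Aztec Ruins High": 34.0,
    "Aztec Ruins Normal": 91.0,
    "Manhattan 3.1": 123.5,
    "T-Rex": 329.0,
}

scale = 1.1 * 2  # +10% back to nominal clocks, then double the GPU config
for test, fps in baseline_fps.items():
    projected = fps * scale
    print(f"{test}: projected {projected:.1f} fps vs A13 {a13_fps[test]} fps")
```

Running it gives the same 35.2 / 88 / 153 / 367 fps figures as the comment; the point being that the entire comparison rests on those two scaling assumptions holding up in silicon.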
drexnx - Tuesday, December 3, 2019 - link
they're totally unafraid to spend as much die space as they need to get their performance scaling. Look at a history of Ax die sizes and you'll see they're all over the place.
Spunjji - Tuesday, December 3, 2019 - link
Agreed. It's their vertical integration at work: they're the only company besides Samsung prepared to spend that much die area on performance, because they can guarantee to sell every chip they make in a high-end, high-margin device.
Andrei Frumusanu - Tuesday, December 3, 2019 - link
Apple's GPUs are the second smallest in the space - only Qualcomm uses less die area.
vladx - Wednesday, December 4, 2019 - link
When you extort your customers like Apple does, you can afford to design more expensive SoCs while still keeping huge profits.

Apple is the biggest example of what a toxic system capitalism can become.
s.yu - Wednesday, December 4, 2019 - link
"Apple is the biggest example of what a toxic system capitalism can become. "Clear sign of a hater, vlad.
"Huawei is the biggest example of what a toxic system state capitalism-cum-corrupt monarchy can become. "
It could direct authorities to jail an individual for 251 days with false testimonies only to be proven innocent and compesated with a recording he kept, and those who lied under oath are never held accountable.
Huawei could frame somebody, to be jailed using the state apparatus supported by taxpayers, to be compensated using tax money when proven innocent, without expending a single cent from its pocket or giving so much as single apology when exposed. Yeah that's so much better than Apple.
Korguz - Wednesday, December 4, 2019 - link
he's right, apple does charge way too much for their products. all apple cares about.. is its profits....
Threska - Wednesday, December 4, 2019 - link
Who isn't selfish? Companies care about profits. Consumers care about the lowest price.
Korguz - Wednesday, December 4, 2019 - link
not like apple does.. their stuff is very overpriced....
s.yu - Wednesday, December 4, 2019 - link
https://www.sixthtone.com/news/1004918/huawei-is-i...
Everybody who has idealistic views of Huawei should be reading this. vlad accuses me of being a hater, and look what he's doing.
s.yu - Wednesday, December 4, 2019 - link
No shit, that was 404'd.
https://www.bbc.com/news/technology-50658787
This is a BBC article, but with far fewer details.
s.yu - Wednesday, December 4, 2019 - link
Key points here:
1. Huawei's HR led a few employees to lie under oath to start the investigation against him.
2. The authorities had the choice between detaining him and not detaining him; all they had to go on were Huawei's testimonies, and they detained him, siding with Huawei despite circumstantial evidence that the accusations were likely false.
3. He was investigated over another false accusation from Huawei a few months in, extending his jail time.
4. Another employee was jailed under similar circumstances but gave in and wrote a confession under Huawei's promise not to press charges, which Huawei immediately seized upon and brought to court.
5. He only discovered the stated reason for jailing him when he met the lawyer appointed by his wife, already months into his effective sentence; only then did he disclose that he had a recording of his discussion with the company regarding compensation (and multiple backups, some of which survived the police search during his arrest), which proved his innocence.
6. Upon the prosecutors terminating the investigation on the revelation of the recording and releasing him with compensation from the state, Huawei immediately modified their testimony.
7. There was never an apology nor compensation from Huawei for framing Li, not in an official capacity, not from the employees and the HR who gave the false testimonies; and the individuals who lied under oath were never prosecuted nor even investigated.
8. In the first 2-3 days of the incident there was intense censorship, on a scale probably unimaginable to an outsider, but the Party realized this could not be suppressed, which brings us to where we are now: they turned to attempting to dictate the public discourse with censored reports and obfuscated details, and to encouraging the spread of effectively irrelevant content defending Huawei from ideological and emotional standpoints.
ksec - Tuesday, December 3, 2019 - link
That assumes the drivers from IMG or the vendors can make their GPU perform as fast as it could.
mode_13h - Wednesday, December 4, 2019 - link
Only because they refuse to open source (or publish details to support open-source driver development).
lucam - Tuesday, December 3, 2019 - link
Do not forget there is also the AXT-48-1536 for premium mobile, which should go even faster than the 1024 and therefore easily compete with the future A14.
Etain05 - Tuesday, December 3, 2019 - link
I think that's for tablets, and the AXT-64-2048 is just a possible option. If you want to consider the AXT-48-1536, you'd have to compare it to the A12X and whatever comes next.
vladx - Wednesday, December 4, 2019 - link
Well, the B-series should be available to customers in 2020, so a BXT-32-1024 should crush the A14, assuming the 30% improvements are accomplished.
Etain05 - Wednesday, December 4, 2019 - link
The A-series will be in devices in the second half of 2020; there's absolutely no way the B-series will be available to customers in 2020. The B-series will be available in 2021, when it will compete with the A15.
vladx - Wednesday, December 4, 2019 - link
By customers I meant SoC manufacturers, like MediaTek, and phones using B-series SoCs could be made available before the A15, since Apple usually releases new phones in Q4.
Etain05 - Wednesday, December 4, 2019 - link
Even if we were to be generous in our assumptions, the soonest we'll see any device with a B-series GPU would be Q3 2021, so at most a few months before the A15. The competition would still be the A15, not the A14. Even more importantly, almost no smartphones come out in Q3, apart from some in August and September, which is exactly when the A15 will be out too.
rbanffy - Tuesday, December 3, 2019 - link
It's still a numeric naming scheme. 0xA comes after 0x9, after all.
mode_13h - Wednesday, December 4, 2019 - link
Yeah, but he asked and they denied it. They should've been like "hey... yeah! yeah, that's exactly right!", except the question was probably fielded by an ignorant marketing drone who doesn't know about hexadecimal.

In fact, maybe whoever actually thought of the naming scheme planned it exactly like that, but that reasoning somehow failed to get communicated.
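For what it's worth, the hexadecimal reading really does hold up numerically; a throwaway sketch:

```python
# 0xA is the successor of 0x9, so "Series A" after "Series 9" does
# continue the numeric sequence, if you squint in hexadecimal.
assert 0xA == 0x9 + 1
assert int("A", 16) == 10
print(hex(9 + 1))  # -> 0xa
```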
domboy - Tuesday, December 3, 2019 - link
Oh... so they aren't announcing a new Kyro video card... oh well... :/
mode_13h - Wednesday, December 4, 2019 - link
Nice one. At first, I thought you confused it with Qualcomm's "Kryo" cores, but misspelled it.

https://en.wikipedia.org/wiki/PowerVR#Series3_(STM...
https://en.wikipedia.org/wiki/Kryo
wrkingclass_hero - Tuesday, December 3, 2019 - link
I wish them success and hope to see them get plenty of design wins. I have never actually owned a PowerVR product, but I've been sort of a distant fan (over here in Android land) and look forward to seeing how they perform. If they do well, who knows, they may even re-enter the console market with the next-gen portables.
mode_13h - Wednesday, December 4, 2019 - link
Sega Dreamcast, biatch.
vladx - Wednesday, December 4, 2019 - link
Their SGX544MP4 was pretty competitive for its time; I used a phone with that GPU and performance was pretty great for 2013.
Rudde - Tuesday, December 3, 2019 - link
The top model (64-2048) running at 1GHz would beat both AMD's and Intel's current integrated GPU lineups, in theory.
Spunjji - Tuesday, December 3, 2019 - link
Drivers would be the killer there, though.
mode_13h - Wednesday, December 4, 2019 - link
Yeah, if only ImgTec would, you know, open source their crap, maybe then someone like Valve could come along and drop some ACO on it.

https://www.gamingonlinux.com/articles/the-valve-f...
mode_13h - Wednesday, December 4, 2019 - link
I dunno, mang. The Ryzen 5 3400G is spec'd at 1971.2 GFLOPS. Pretty close, in terms of raw compute.
xenol - Tuesday, December 3, 2019 - link
The performance increase graph on the first page cracks me up. It's like marketing forgot the basics of how to make a meaningful graph.
mode_13h - Wednesday, December 4, 2019 - link
yeah... "forgot"
vladx - Tuesday, December 3, 2019 - link
Let's go IMG Tech, we need more competition in the mobile GPU sector!
Sivar - Tuesday, December 3, 2019 - link
Comparison with desktop parts:

2 TFLOPS single-precision is comparable with the Geforce GTX 960 (2.3 TFLOPS) or GTX 1050 (~2 TFLOPS).
A GTX 1080 (vanilla, non-super) does about 11TFLOPS.
64 gigapixels/s (fill rate) is comparable with a Geforce GTX 970 or some GTX 1060 parts.
A GTX 1080 (vanilla, non-super) does about 105 gigapixels/s.
These simple metrics do not indicate that Imagination's new GPU has real-world performance similar to these cards, of course, but the raw numbers are impressive for a low-wattage part.
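Those raw numbers fall out of a one-line calculation, assuming (as suggested elsewhere in the thread, not an official spec) that the "64" and "2048" in AXT-64-2048 encode pixels per clock and FP32 FLOPS per clock, at an assumed 1 GHz clock:

```python
# Sketch of where the raw figures above come from, assuming the
# AXT-64-2048 name encodes 64 pixels/clock of fill rate and 2048 FP32
# FLOPS/clock, at an assumed 1 GHz clock (thread assumptions, not specs).
clock_hz = 1.0e9
flops_per_clock = 2048
pixels_per_clock = 64

tflops = flops_per_clock * clock_hz / 1e12   # FP32 compute, in TFLOPS
gpixels = pixels_per_clock * clock_hz / 1e9  # fill rate, in Gpixel/s

print(f"{tflops:.3f} TFLOPS, {gpixels:.0f} Gpixel/s")
# -> 2.048 TFLOPS, 64 Gpixel/s
```

Which is how a per-clock IP spec turns into the "~2 TFLOPS / 64 Gpixel/s" desktop-class comparison; the final numbers in a shipping SoC would scale with whatever clock the licensee actually hits.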
The_Assimilator - Tuesday, December 3, 2019 - link
So what you're saying is that this "fastest GPU IP ever created" has theoretical throughput figures that are lower than two-generation-old midrange desktop parts.

Man, it's gonna be exciting when this is released and it's total unmitigated shite, like every mobile GPU ever.
ET - Wednesday, December 4, 2019 - link
For me a more useful comparison point is the consoles. The Xbox One S is 1.4 TFLOPS, the PS4 is 1.84 TFLOPS, and, more to the point, the Switch supposedly reaches 1 TFLOPS at 16-bit maximum, but in practice, and for 32-bit, it's around 400 GFLOPS (when docked).

So in theory the AXT-64-2048 could make for quite a decent low-power console chip, and a good upgrade avenue for Nintendo.
(Sure, Xbox and PS have moved a little forward since then, and will move more next year, but, as an owner of a One S, I still find it quite impressive what can be achieved with this kind of GPU power.)
mode_13h - Wednesday, December 4, 2019 - link
The Nintendo Switch uses the Tegra X1, which was made to be a high-end tablet SoC. So, by extension, it's not surprising that a modern candidate for that application would potentially be a worthy successor for the Switch.

Speaking of set-top consoles, you're citing 2013-era models (okay, the One S is more recent, but really a small tweak on the original spec). If you instead look at the PS4 Pro and One X, then you'll see that the set-top consoles have moved far beyond this GPU.
Lolimaster - Tuesday, December 3, 2019 - link
They just lost it, now even worse with AMD making its return to ARM SoCs.
melgross - Tuesday, December 3, 2019 - link
Imagination was in trouble for a long time. The reason Apple, and Microsoft before that, left was that Imagination refused to go along with requests from both companies for custom IP. Apple, for example, needed more work on AI and ML. Imagination refused to work on that for them, which was a major mistake, as Apple was half their business and generating more than half of their profit.

When Apple announced they were developing their own GPU, they said that within two years they would no longer be using any Imagination IP. Imagination confirmed that. The assumption there was that older SoCs that Apple would continue to use in other devices would still incorporate the IP until they had been superseded by newer versions.
It’s believed that newer Apple SoCs contain no Imagination IP.
It’s interesting to see that this new Imagination IP seems to be close to what Apple wanted, but what Imagination refused to give them. A fascinating turnabout. Now it remains to be seen whether this serious improvement upon their older IP is really competitive with the newest IP from others, when it actually is in production, assuming it will really be used.
Andrei Frumusanu - Tuesday, December 3, 2019 - link
> When Apple announced they were developing their own GPU, they said that within two years they would no longer be using any Imagination IP. Imagination confirmed that.
The only thing Imagination confirmed is that Apple told them that. Ironically, all those press releases and all official mentions of this have disappeared from both companies, which is essentially a sign that the two companies buried the hatchet and came to some form of agreement.
> It’s believed that newer Apple SoCs contain no Imagination IP.
Well no, we're still here two years later. Apple's GPUs still very much look like PowerVR GPUs with similar block structures, they are still using IMG's proprietary TBDR techniques, and they even publicly expose proprietary features such as PVRTC. Saying Apple GPUs contain none of IMG's IP just shows incompetence on the topic.
melgross - Tuesday, December 3, 2019 - link
Well, I’m going by what Apple themselves have said. So if you think they’re lying, good for you. But I’ll take their statements as fact first.
Qasar - Tuesday, December 3, 2019 - link
just like you seem to do with Intel???
mode_13h - Wednesday, December 4, 2019 - link
You saw that Andrei worked there 'till 2017, right? So, yeah, go ahead and argue with him. You're obviously the expert, here.
Korguz - Wednesday, December 4, 2019 - link
mode_13h, of course he is. He believes all the lies and BS that Intel is also saying...
vladx - Wednesday, December 4, 2019 - link
Lol, taking the words of a shady company like Apple as fact, good one melgross.
s.yu - Wednesday, December 4, 2019 - link
Championing a shady company like Huawei, good one vlad.
name99 - Tuesday, December 3, 2019 - link
Andrei is as close to correct as one can hope to be.
All we ACTUALLY know is that PowerVR has NEVER sued Apple. They said some pissed off things in a press release, then mostly took them back in a later press release.
And Apple said it would cease to pay *royalties* to Imagination within a period of 15 months to two years.
It is possible that Apple created something de novo that's free and clear of IMG and that's that.
It's also possible that Apple paid (and even continues to pay?) IMG large sums of money that are not "royalties" but something else (IP licensing? patent purchase? ...).
We have no idea which is the case.
Apple is large enough that the money paid out could be hidden in pretty much any accounting line item. IMG is private, so it has no obligation to tell us where its money comes from.
mode_13h - Wednesday, December 4, 2019 - link
Yeah, it could've been a one-time lump-sum payment. That's not technically "royalties".
lucam - Tuesday, December 3, 2019 - link
Very true. If you run GFXBench on the new iPhone 11 Pro with the A13, you will see that the GL drivers are all PowerVR and IMGTC, exactly the same as the previous A12. So in my mind the A13 still has some form of PowerVR design in it.
vladx - Wednesday, December 4, 2019 - link
Don't kid yourself, Apple tried to buy IMG Tech on the cheap and I'm glad Imagination refused their insulting offer.
lucam - Wednesday, December 4, 2019 - link
To be honest, I think Apple still has some PowerVR design in its current chips, even the A13; otherwise I can't explain why the GL drivers (PowerVR, IMG texture compression, etc.) show up when I run GFXBench... I think Apple and Imagination still somehow have some form of good relationship behind the scenes...
s.yu - Tuesday, December 3, 2019 - link
"ISO-area and process node comparison"
Goddamn. So refreshing to see a comparison under such standard conditions, even if it only shows up in footnotes. This is the industry that I recognize, not Huawei's deceitful comparison during the launch of GPU Turbo against an age-old design and age-old architecture *without* due clarification for months.
Sychonut - Tuesday, December 3, 2019 - link
Now imagine this, but on 14+++++++. A beauty.
mode_13h - Tuesday, December 3, 2019 - link
> There are very few companies in the world able to claim a history in the graphics market dating back to the “golden age” of the 90’s. Among the handful of survivors of that era are of course NVIDIA and AMD (formerly ATI), but there is also Imagination Technologies.
Well... if you're going to count ATI, who got gobbled up by AMD, then you might also count Qualcomm's Adreno, which came from BitBoys Oy and passed through ATI/AMD.
mode_13h - Tuesday, December 3, 2019 - link
> after over 20 years it's no longer a brand in and of itself
Only 20 years? Pfft.
After 55 years Ford's Mustang is still around, and it's now an electric SUV.
And long after x86 is a thing of the past, you'd better believe Intel will *still* be using the Pentium branding for at least some of their CPUs.
Goshi112112 - Tuesday, December 3, 2019 - link
Good
mode_13h - Wednesday, December 4, 2019 - link
The idea of a super-wide SIMD seems somewhat at odds with tiled rendering. Unless you can scale up your tile sizes (which might be how they got away with it), it seems that it'd be difficult to pack your 128-lane SIMD with conditionally-coherent threads if you're also limiting the parallelism with a spatial coherency constraint.
lucam - Wednesday, December 4, 2019 - link
I believe IMG has also proposed good solutions in the past. The problem was they never got to market, as they were never licensed. We have only seen some low-to-midrange solutions in MediaTek SoCs that never shined and nobody even bothered with.
Now the main question still remains: will IMG be able to license high-end solutions to third parties, so we can actually get our hands on them?
Otherwise it will still be another paper show-off and nothing more... I am afraid... 😦
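mode_13h's divergence concern above can be sketched with a toy model: in an SIMT design, a SIMD group that mixes both outcomes of a branch must execute both sides serially, so wider groups are fully coherent less often. A rough Python sketch with made-up widths (32 vs. 128) and a made-up 2% branch probability — purely illustrative, not anything from Imagination's actual scheduler:

```python
import random

def utilization(simd_width, p_taken, n_trials=10000, seed=0):
    """Estimate average lane utilization when a branch splits a SIMD
    group: if the group mixes taken and not-taken lanes, both sides
    run serially (classic SIMT divergence), halving useful work."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        taken = sum(rng.random() < p_taken for _ in range(simd_width))
        if taken in (0, simd_width):
            total += 1.0   # fully coherent: one pass, all lanes busy
        else:
            total += 0.5   # two passes, each with only one side's lanes active
    return total / n_trials

# Wider groups are more likely to mix both branch outcomes,
# so (all else equal) their average utilization drops faster.
for width in (32, 128):
    print(width, round(utilization(width, 0.02), 3))
```

With these toy numbers the 128-wide group diverges on most branches that the 32-wide group sails through coherently, which is why tile-level spatial coherency matters more as the SIMD gets wider.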
mode_13h - Wednesday, December 4, 2019 - link
This is not a new problem for chip (or IP) companies. The job of a good sales & marketing team is to engage with potential customers and figure out what specs their product would need to have to potentially win their business.
Of course, whatever the competition & end-user markets do are wildcards you can't control.
vladx - Wednesday, December 4, 2019 - link
I wouldn't consider the Helio P90 low-midrange; in fact, it's close to a Snapdragon 730 performance-wise.
lucam - Wednesday, December 4, 2019 - link
Indeed... but we only saw this in the market a few months ago, and it's not even the Furian version. The only chips I have seen around have the GE8320... come on... they really are low, low, low range. I wish we had seen some 9XT around, but it didn't happen and perhaps never will. Now I look forward to seeing this new A-Series... but my doubts still remain... I hope to be wrong.
nvmnghia - Saturday, December 7, 2019 - link
So today's smartphones have these for AI:
- DSP
- "neural engine"
- CPU (is there an instruction/separate die area for this?)
- GPU
mpbello - Monday, December 9, 2019 - link
Are they going to offer open source drivers for this new series?
peevee - Monday, December 9, 2019 - link
Why do you call wider vectors "thread-level parallelism"? Seems the opposite of the meaning of threads, as threads must be able to execute different pieces of code.
mode_13h - Monday, December 9, 2019 - link
Nah, not in GPU parlance. Nvidia has long talked about each element of a SIMD as a thread. What CPU folks would call a "thread", Nvidia calls a "warp" and AMD calls a "wave". Not sure what Imagination calls it.
ballsystemlord - Friday, December 20, 2019 - link
Grammar error:
"...with for example each HyperLane having being able to be configured with its own virtual memory space..."
Excess words, try:
"...with for example each HyperLane being able to be configured with its own virtual memory space..."
Fataliity - Saturday, December 28, 2019 - link
I bet you this GPU IS the Intel one, and Furian is the Ice Lake one. 64 EUs at 1 TFLOP = Ice Lake, 1 GHz. Furian is 1/2 the speed of the A-Series.
Intel's GPU is Xe. This is A - XE, XT, XM.
But the base of 1/16 is called..... XE.
Add to that this
https://news.synopsys.com/2016-03-31-Intel-Custom-...
"certified on Intel 10nm PowerVR GT7200"
Who's buying this on Intel's process, and why is it certified on IMG Tech's GPU, if it's not their integrated graphics under a different name?