Intel’s ‘Tick-Tock’ Seemingly Dead, Becomes ‘Process-Architecture-Optimization’
by Ian Cutress on March 22, 2016 6:45 PM EST

As reported at The Motley Fool, Intel’s latest 10-K / annual report filing suggests that the ‘Tick-Tock’ strategy of introducing a new lithographic process node in one product cycle (a ‘tick’) and then an upgraded microarchitecture in the next product cycle (a ‘tock’) will fall by the wayside for at least the next two lithographic nodes, to be replaced with a three-element cycle known as ‘Process-Architecture-Optimization’.
Intel’s Tick-Tock strategy has been the bedrock of its microprocessor dominance over the last decade. Every other year, Intel would upgrade its fabrication plants to produce processors with smaller features, improving die area and power consumption and applying minor microarchitectural tweaks (a ‘tick’); in the intervening years it would launch a new set of processors based on a wholly new, sometimes paradigm-shifting, microarchitecture for large performance gains (a ‘tock’). However, due to the difficulty of implementing a ‘tick’ at ever-decreasing process node sizes and the complexity that comes with them, as reported previously with 14nm and the introduction of Kaby Lake, Intel’s latest filing suggests that 10nm will follow a similar pattern to 14nm by introducing a third stage to the cadence.
From Intel's report: As part of our R&D efforts, we plan to introduce a new Intel Core microarchitecture for desktops, notebooks (including Ultrabook devices and 2 in 1 systems), and Intel Xeon processors on a regular cadence. We expect to lengthen the amount of time we will utilize our 14nm and our next generation 10nm process technologies, further optimizing our products and process technologies while meeting the yearly market cadence for product introductions.
The new PAO, or ‘Process-Architecture-Optimization’, model is a direct result of the complexity of developing and implementing new lithographic nodes (Intel has even entered into a new five-year agreement with ASML to develop new extreme ultraviolet lithography techniques). Each new process node also needs time for yields to become high enough to remain financially viable in the long term. It has been well documented that the complexity of Intel’s 14nm node, using the latest generation of FinFET technology, took longer to reach maturity than 22nm did. As a result, product launches were stretched out, and within a three-year cycle Intel was producing only two new generations of products.
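To see why an immature node hits large dies hardest, a simple Poisson defect model is a useful back-of-the-envelope tool: the fraction of good dies falls exponentially with die area multiplied by defect density. The sketch below uses illustrative defect densities and die sizes, not Intel figures.

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_mm2: float) -> float:
    """Fraction of good dies under a simple Poisson defect model: Y = exp(-D0 * A)."""
    die_area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Illustrative defect densities: an immature process vs. a mature one (defects per cm^2).
for d0 in (0.5, 0.1):
    # A small mobile-class die vs. a large high-end die, in mm^2.
    for area_mm2 in (100, 400):
        print(f"D0={d0:.1f}/cm^2, die={area_mm2}mm^2 -> yield ~ {poisson_yield(d0, area_mm2):.0%}")
```

With these assumed numbers, a 100mm2 die improves from roughly 60% to 90% yield as the process matures, while a 400mm2 die improves from roughly 14% to 67% - which is why a new node has to mature before the largest dies make financial sense.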
Intel’s fabs in Ireland, Arizona, and Oregon are currently producing wafers on the 14nm node, with Israel joining Arizona and Oregon on the 22nm node. Intel also has agreements in place for third-party companies (such as Rockchip) to manufacture Intel’s parts for certain regional activity. As well as looking forward to 10nm, Intel’s filing also mentions projects in the works to move from 300mm wafers to 450mm wafers to reduce cost, although it does not put a time frame on the transition.
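For a rough sense of the wafer-size economics, the standard gross-die-per-wafer approximation shows how many candidate dies fit on 300mm versus 450mm wafers; the die size below is an assumed example, not a specific Intel product.

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Common approximation: pi*(d/2)^2 / A  -  pi*d / sqrt(2*A), ignoring edge exclusion and scribe lines."""
    usable = math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(usable - edge_loss)

die_area_mm2 = 150.0  # assumed die size for illustration
for wafer_mm in (300, 450):
    print(f"{wafer_mm}mm wafer -> ~{gross_dies_per_wafer(wafer_mm, die_area_mm2)} candidate dies")
```

A 450mm wafer has 2.25 times the area of a 300mm wafer, so each pass through the fab yields a bit more than twice as many candidate dies, which is where the cost reduction would come from.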
The manufacturing lead Intel has held over rivals such as Samsung, TSMC, and Global Foundries over the past few years has put it in a commanding position in both home computing and the enterprise. One could argue that by elongating the next two process nodes, Intel might lose some of that advantage, especially as other companies start hitting their stride. However, the research gap is still there: Intel introduced 14nm back in August 2014 and has since released parts upwards of 400mm2, whereas Samsung 14nm / TSMC 16nm had to wait until the launch of the iPhone to see 100mm2 parts on the shelves, with Global Foundries still to launch its 14nm parts into products. While this relates to density, both power and performance are still considered to be on Intel’s side, especially for larger dies.
Intel's Current Process Over Time
On the product side of things, Intel’s strategy of keeping the same microarchitecture for two generations allows its business customers to rely on the lifetime of the halo platform and maintains consistency with CPU sockets in both consumer and enterprise. Moving to a three-stage cycle throws some uncertainty on this, depending on how much ‘optimization’ goes into the third stage of the PAO cycle: whether it will be microarchitectural, bring better voltage and thermal characteristics, be graphics focused, or even keep the same socket/chipset. This has a knock-on effect on Intel’s motherboard partners, who have relied on the updated socket and chipset every two generations as a spike in revenue.
Suggested Reading:
EUV Lithography Makes Good Progress, Still Not Ready for Prime Time
Tick Tock on the Rocks: Intel Delays 10nm, adds 3rd Gen 14nm
The Intel Skylake Mobile and Desktop Launch with Microarchitecture Analysis
Source: Intel 10-K (via The Motley Fool)
98 Comments
Pork@III - Thursday, March 24, 2016 - link
I lived better in times when processors were not yet available for individuals.

blzd - Friday, March 25, 2016 - link
Oh no, not a "good ol' days" comment.

FunBunny2 - Wednesday, March 23, 2016 - link
there was a time, aka WinTel, when the symbiosis betwixt M$ and Intel was sufficient to keep both obscenely rich. Windoze/Office would barely work on current cpu, but OK on the one about to be released. a cycle source and a cycle sink. since the great demand for Office came from, well offices, both companies had known demand, and growing. if M$ held up its end by bloating the OS and Office on schedule. 99.9995% of what gets done in Word/Excel can be handled by a low-end Pentium, and has, obviously, for years. these days the symbiosis has shifted to gamers, and we know how deep pocketed and rampant they are. well, not so much. thus the SoC-ing of the desktop cpu. and such. we've reached the asymptote of computing. find someplace else to look for rapid growth.

willis936 - Wednesday, March 23, 2016 - link
You can spot a crazy ramble when you see "M$" at least twice in four sentences.

BrokenCrayons - Wednesday, March 23, 2016 - link
I've just gotten a call from the 1990's. They say they'd like their M$ back.

redfirebird15 - Wednesday, March 23, 2016 - link
This was bound to happen eventually, and even if Intel could mass produce 10nm chips right now, the only real benefits would be the supposed power savings. Intel has put so much focus on low-power use cases that the power users who want/need absolute performance are still content with Sandy Bridge, except those who require the most cores available, in which case Haswell-E is available.

I get it. We all want to lower power consumption and heat output, especially businesses who have hundreds or thousands of laptops. But us power users are desperate to see real performance gains from architecture and process improvements. Sadly, Sandy Bridge was such a radical jump in performance that Intel had set the bar too high for the next generations.
I jumped from a socket 939 opteron dualcore to the i7 920. Massive improvement. I bought an i7 2500 sandybridge. Best proc ever. I have an ivybridge i7 laptop and now a skylake i3 6100. Day to day, with an ssd in each, they all perform the same.
The power users who need absolute IPC performance are the ones getting screwed with each generation. But that is such a small subset of Intel's sales, i assume they just dont care.
Murloc - Wednesday, March 23, 2016 - link
who are these power users?

I mean, people I know who use lots of CPU computing resources just make the simulation run on university shared computing resources and stuff.
redfirebird15 - Wednesday, March 23, 2016 - link
I'm definitely not one, but im sure some folks need the absolute best IPC available for their specific applications. Media professionals i suppose would want the best IPC so they can finish their current project and move on.

The way i see it, there are pretty much 4 types of consumers: those who want the absolute lowest cost cpus and dont care about the performance, those with a limited budget who want the best perf/$, those who want the best perf/watt due to power and heat concerns, and those who need the best cpu to augment a specialized application i.e. highest single thread perf or most cores/cpu.
I suppose the power users I'm referring to have a job/hobby where time is valuable, so they must find the best compromise of the above.
Brandon
BrokenCrayons - Wednesday, March 23, 2016 - link
For those citing Moore's Law - Moore's observation wasn't entirely tied to physics and engineering. There are other drivers to consider that are much more closely related to industry economics driven by customer demand and intertwined with software. While many of us get a gleeful little sparkle in our eyes when looking at new hardware, we often forget that the hardware is absolutely not the alpha and omega of computing. In fact, the reason why the hardware exists is to push the software and it's those programs that satisfy a variety of human wants and needs. Software hasn't been a major driver of new hardware adoption for quite some time now and only demands the purchase of new equipment at a relatively relaxed pace when compared to earlier periods in computing industry history (say, the Win9x era, for example).

Intel's lengthening of time on each manufacturing process is as much tied to economic factors as it is to engineering challenges. Credible competitive threats simply don't exist at present in Intel's primary processor markets. New system purchases aren't putting a large pull pressure on their supply chain. Software that requires vast improvements in CPU compute power is slow to emerge. Certainly, new manufacturing processes are becoming more difficult to develop, but we would be remiss if we didn't consider other factors besides physics.
Then again, I still have a Q6600 at stock clocks in my last remaining desktop computer so what the crap do I know about any of this?
cobrax5 - Wednesday, March 23, 2016 - link
I mean, that's true in a general sense, but not absolutely true. Think about how much processing power, bandwidth, storage, etc. it takes to run 4K video. You couldn't do that 10 years ago. Mobile SoC's can do that now. My TV can stream 4K video with its (probably mobile-based) SoC inside. There have been huge strides in specialized blocks of silicon that are in every CPU/SoC sold now for things like encryption, video encode/decode, virtualization, etc.