92 Comments
CloudWiz - Wednesday, July 20, 2016 - link
Hopefully it brings a larger IPC improvement than Skylake brought over Broadwell.
Also hoping for far better graphics in this chip.
VoraciousGorak - Thursday, July 21, 2016 - link
Yeah, my friend and I still have Sandy Bridge computers that are not only perfectly serviceable but still provide top-tier performance in all but a few programs that leverage the handful of new instructions introduced in subsequent chips. Sure, there are platform improvements (PCI-E 3.0, USB Type-C, NVMe and M.2 SSD ports), but nothing that has so far justified a complete platform upgrade. He took his 2500K that he bought day 1 of the Sandy Bridge release, dropped a fat SSD and a new video card in it, and is good to go.
Mikuni - Thursday, July 21, 2016 - link
Yeah, still got my 5+ year old SB i5-2500K @ 4.3GHz, faster than the current gen's non-K i7, barely any reason to upgrade.
willis936 - Thursday, July 21, 2016 - link
I'm sorry, but that just doesn't check out. That's a 16% OC, and comparing benchmarks of the 6700 to the 2500K (non-OC), the gap varies between 25% and 45% depending on the metric.
ikjadoon - Thursday, July 21, 2016 - link
True. Clock for clock, Skylake is, at best, 25% faster than Sandy Bridge (where are you getting 45%?). So, a 4GHz Skylake is like a 5GHz Sandy Bridge.
extide - Thursday, July 21, 2016 - link
He's not talking clock for clock. He's talking the perf of an OC'd 6700K vs that dude's 4.3GHz SB i5.
willis936 - Saturday, July 23, 2016 - link
Nope. Check benches on CPUBoss. Without normalizing frequency, the stock 6700K is at worst 25% faster than a 4.3 GHz 2500K.
Jimster480 - Sunday, July 24, 2016 - link
CPUBoss is a fake website that yields no real information. The only benchmark they ever actually cite is Passmark.
The rest of it is just magical marketing bars where they draw little lines to get you to buy CPUs; they get commissions if you click the links on the site.
That site is the #1 shill site on the net for CPU benchmarking.
Jimster480 - Sunday, July 24, 2016 - link
25%? Not even 10%.
Honestly, the 25% is from using AVX2/AVX-512 vs. using older SSE/AVX.
But this doesn't apply to 90+% of applications.
In integer performance it's ~3% faster than Sandy, as Haswell isn't really faster than Sandy anyway.
Sandy -> Ivy was ~5% overall.
Ivy -> Haswell was ~3% overall.
Haswell -> Broadwell was ~5% overall.
Broadwell -> Skylake was ~2% overall.
Not sure where that adds up to 25 or 45%.
Those benchmarks are cherry-picked using the newest Intel technologies while being simultaneously de-optimized for older technologies, to provide the largest possible bump available.
Intel has been involved in many scandals with paying benchmark companies off to make their chips seem better than they are.
Why do you think a 2600K vs. a 6700K at the same clockspeed yields the same FPS in virtually EVERY GAME, and the same performance in virtually every benchmark that isn't a synthetic, Intel-optimized one?
Intel has even gone as far as specifically de-optimizing calls when it detects certain "tiers" of CPUs, such as Celeron/Pentium, so that the performance is artificially lower in order to entice you to get a more expensive chip.
Just look at the G3258: in benchmarks, even with a heavy OC, it could barely match the speed of an i3 (which cannot OC), when in reality, in games that use only 1-2 threads, there were no real differences in FPS.
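Taking those per-generation figures at face value, they compound multiplicatively rather than add up. A minimal sketch of the arithmetic, using only the percentages quoted above, not measured data:

```python
# Compound the per-generation gains quoted above: they multiply, not add.
gains = [0.05, 0.03, 0.05, 0.02]  # Sandy->Ivy, Ivy->Haswell,
                                  # Haswell->Broadwell, Broadwell->Skylake
total = 1.0
for g in gains:
    total *= 1 + g

print(f"Sandy Bridge -> Skylake: ~{(total - 1) * 100:.1f}% at the same clock")
# prints ~15.8%, which is where the ~16% figure below comes from
```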
evilpaul666 - Sunday, July 24, 2016 - link
Using those percentages, Sandy -> Skylake should be a ~16% increase at the same clock speed.
Asubmani - Sunday, November 6, 2016 - link
As per Intel's CPU comparison, it is an 8.6% increase in base frequency between the i5-6200U and the i5-7200U. Apart from that, the only difference is Intel HD Graphics 520 vs. 620 (a non-issue if using a 3rd-party graphics card).
tamalero - Tuesday, July 26, 2016 - link
For the game part... I'd say it means the game is video-card starved, not CPU starved.
user7624312 - Wednesday, August 10, 2016 - link
AVX-512? Please enlighten me how you would use that on any current Skylake processor.
serendip - Thursday, July 21, 2016 - link
I've got a Core 2 Duo laptop that I've kitted out over the years with lots of RAM and a fast SSD. Still good to go :)
For office and typical home users, I could see the same desktops being used for 5 years or more. Bad news for Intel then.
eddman - Thursday, July 21, 2016 - link
This might sound funny, but I'm using a Core 2 Quad Q9550 @ 3.4 GHz and it runs pretty much everything at 1920 x 1080 at 40-70 FPS.
CPUs became "good enough" a long while ago.
extide - Thursday, July 21, 2016 - link
Good enough, yes, but a Core 2 Quad is even slower than the AMD cores on the market right now, so yeah, there is that.
eddman - Thursday, July 21, 2016 - link
Um, what's your point? Obviously newer AMD chips are faster than that. I just wanted to point out that even an 8-year-old CPU is able to handle current games at 1920 x 1080.
ACE76 - Thursday, July 21, 2016 - link
It's true... Instead of getting rid of my old X58 system, I upgraded the RAM to 48GB, slapped a 6-core Xeon X5670 in it, and use two TLC SSDs in RAID for a very capable system. I still built a Skylake system, but only because it had been so long and I wanted to... older systems are more than capable of handling today's games, etc. with the right video card.
Jimster480 - Sunday, July 24, 2016 - link
Depends on what you are doing, but yes, you are right.
And the reality is that FX is actually more than enough for 90% of cases.
abstraction - Thursday, July 21, 2016 - link
Hello fellow Q9550 owner. Mine is overclocked at 3.9 GHz. For now I have a strange configuration, since my video card is a GTX 980 Ti. Believe it or not, my PC runs the recent Doom at max settings and 2560x1440 at 130-150 FPS. Yes, it's the Vulkan API, which made CPUs even less relevant for games. DirectX 12 will do the same, but it's still yet to come. I'll probably wait till Skylake-X comes out.
eddman - Thursday, July 21, 2016 - link
That's quite something and good to know. I'm planning on getting a 1060 or 480. I suppose a Q9550 won't be much of a bottleneck for those cards after all.
abstraction - Friday, July 22, 2016 - link
A Q9550, once overclocked well, will not be a perceptible bottleneck in games built on Vulkan and (probably) DirectX 12, if you use a recent video card. It needs to be noted that you have to get your DDR2 memory running at a high clock too, from 1.3 GHz, I think, and with good latencies.
Jimster480 - Sunday, July 24, 2016 - link
I have a few friends still running Core 2 Quads and they are doing just fine!
Hell, I have a few friends running Phenom IIs still and they are also doing just fine!
Most games don't take advantage of new CPU technologies, so it's really irrelevant, as these generational "gains" are mostly in specialized instruction sets such as AVX2 and AVX-512 or FMA, etc.
The only useful CPU instruction set in modern CPUs is honestly AES, where it helps to speed up web browsing and other secure connections, but even then the gains are not that serious and make more of a difference when hosting.
Panoramix0903 - Friday, July 22, 2016 - link
Not funny. I have an old Dell Optiplex 755 MT, bought in Dec. 2007, with the original configuration of an Intel Core 2 Duo E6550 + 2x1 GB DDR2 666 MHz, an 80GB Seagate SATA 7200 RPM HDD and an AMD Radeon HD2400Pro PCI-E DDR2 + Windows XP Professional. Now, in the same case, same mainboard, I have an Intel Core 2 Quad Q9650, 4x2GB DDR2 800 MHz, a 240GB Intel SSD 535 + 1TB Seagate 7200 RPM HDD, a Sapphire AMD Radeon HD7750 1GB GDDR5 PCI-E and Windows 7 Professional 64-bit, and everything works flawlessly.
Jimster480 - Sunday, July 24, 2016 - link
Yep, and I am sure you can upgrade that GPU to an RX 480 and still be fine for a few more years, or until the mobo craps out.
willis936 - Saturday, July 23, 2016 - link
If 1080p60 is what you're aiming for, yeah, it's enough. If you want 1440p120, even the high end today cuts it, but not by a wide margin.
Jimster480 - Sunday, July 24, 2016 - link
120FPS is harder to achieve, but that is just due to single-thread limitations, since a lot of rendering is still single-threaded.
Vulkan and DX12 will eliminate those issues.
I can do 4K 60 on an R9 285 and an i7 QED8 engineering sample (Haswell, 2.7GHz) with little to no issues. And that is running DDR3 1333MHz RAM too!
Ultra settings are just a joke made to force you to buy a new GPU, demanding 3-12x as much GPU power for a ~5% better visual experience.
patel21 - Thursday, July 28, 2016 - link
My Q6600 is still rocking too :-)
Mr Perfect - Thursday, July 21, 2016 - link
Same here. After maxing out the RAM, replacing the spinning rust with SSDs, and upgrading to Windows 10, my SB rig gives me no reason to upgrade.
Anandtech did a nice comparison of Sandy Bridge with Skylake when it came out, and I'm really hoping they do the same thing with Kaby Lake. With DX11 games only showing a few FPS difference, it wasn't even close to the death of SB they declared. It would be especially interesting now to see how the new generation of GPUs fares with SB in DX12 games. Just how well has DX12 removed the CPU bottleneck?
ikjadoon - Thursday, July 21, 2016 - link
TBH, if you game at 120Hz on CPU-intensive games, you need a very fast CPU. You need at least an 8-threaded CPU to output a full 1080p frame in under 8.3ms consistently, I've recently found out. :(
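The 8.3ms number is just the frame-time budget the refresh rate implies: every frame has to be simulated, submitted and rendered inside 1000/Hz milliseconds. A quick sketch of that arithmetic, with common refresh rates as illustrative values:

```python
# Frame-time budget at common refresh rates: budget_ms = 1000 / Hz.
for hz in (60, 120, 144):
    print(f"{hz:>3} Hz -> {1000 / hz:.2f} ms per frame")
# 60 Hz -> 16.67 ms, 120 Hz -> 8.33 ms, 144 Hz -> 6.94 ms
```

spikebike - Thursday, July 21, 2016 - link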
Very unlikely. Using the same process means no more transistors for the same size/cost die. So it will be small tweaking of the existing chip to save a bit of power or eke out a bit higher clock. Basically, the next-generation chip will be very similar to what they are shipping today, with slightly different characteristics. All the easy IPC optimizations have already been done.
edzieba - Thursday, July 21, 2016 - link
"Very unlikely. Using the same process means no more transistors for the same size/cost die."Not necessarily true. Look at the change from Nvidia's Kepler to Maxwell architectures: transistor density increased and power efficiency increased, within the same process node. There is plenty of improvements to be made in designing for a process, and in improvements in implementing a process, in addition to just the process scale.
ishould - Thursday, July 21, 2016 - link
To be fair, there is a lot more room for new/better architectures in GPUs than CPUs. The transistor density is probably more related to optimized designs with less congestion than anything else.
Jimster480 - Sunday, July 24, 2016 - link
Well, if you look at how poor the gains were from Broadwell to Skylake... don't expect gains from Skylake to Kaby Lake.
I mean, Broadwell was 14nm and Skylake is barely faster, and in some cases actually slower.
And Broadwell was just a shrunken and optimized Haswell.
YukaKun - Thursday, July 21, 2016 - link
Don't forget processes can also be optimized. It might not be much up front, but it all adds up.
I think Intel and all other manufacturers are hitting diminishing returns, but can't find anything else to optimize. Or at least, they make it appear like that.
Cheers!
extide - Saturday, July 23, 2016 - link
Skylake 4+2 (Quadcore w/GT2 graphics, most common chip) was like ~100mm^2 -- so they have a lot of room to bump up the die size if they want.
nandnandnand - Thursday, July 21, 2016 - link
It could be the least IPC improvement ever seen. 2% even.
Zen is the chip to watch. Kaby Lake is just something that allows AMD to not die until it can release Zen.
warreo - Thursday, July 21, 2016 - link
Methinks you did not bother reading the article...
Michael Bay - Thursday, July 21, 2016 - link
Zen is the chip to laugh at.
When you're not laughing at rebrandeons or electrical design failures.
close - Thursday, July 21, 2016 - link
You must be new to the internet or computery things.
silverblue - Thursday, July 21, 2016 - link
Considering we're talking about central processing units here, graphics processing units are hardly relevant.
Off you pop now.
Michael Bay - Thursday, July 21, 2016 - link
AMD fails everywhere, it's all good.
yannigr2 - Thursday, July 21, 2016 - link
@M B
If you want to pay $400 for the next 1X60 card and $300 for the next i3, then you can only hope for AMD going away. You could also hope that scientists will really soon be able to inject functional braincells into someone's head. You really need some of those.
Michael Bay - Thursday, July 21, 2016 - link
Goodwill has its limits.
bj_murphy - Monday, July 25, 2016 - link
Or move to Canada, those prices sound about right.
JoeMonco - Thursday, July 21, 2016 - link
Oooooooh. You're gonna get the panties of the AMD fangirls all twisted up with that comment.
Don't you know that Zen is gonna be the second coming of Jesus?
Spunjji - Thursday, July 21, 2016 - link
So by your reading it's either agree with the idiotic statement or be a ravening fanbot? Don't mind me while I do neither and ignore you strange individuals.
ikjadoon - Thursday, July 21, 2016 - link
But... if we look at AMD's estimates, they say it'll compete with an i7-5960X. That's 8 Haswell cores @ 3.5GHz. Nothing amazing in terms of IPC. Much more competitive than anything they have now, but it's still a 2016 CPU competing with a "meh IPC" 2014 CPU.
sharath.naik - Sunday, July 24, 2016 - link
AMD never mentioned clock speed, only IPC. Usually that's not by accident; it means that their clock speed will not match Intel's or their own current-generation CPUs. So the performance of Zen is a big question mark. IPC only means it can be power efficient; it does not mean it will match the speed.
I am seeing a repeat of their RX 480 release: as promised, good IPC, but with a clock speed so low it does not even compete against their own older-generation GPUs.
Michael Bay - Friday, July 22, 2016 - link
Of course it will, and we all will repent!11!!!
tamalero - Tuesday, July 26, 2016 - link
"rebradeons". Which ironically is done by both companies.hemedans - Thursday, July 21, 2016 - link
What I learned from the Kaby Lake leaks is that it will have insane clocks, plus an IPC improvement; I won't be surprised if it brings a 20 to 30% performance increase over Skylake.
close - Thursday, July 21, 2016 - link
The leaks showed the i7 7700K at 3.6GHz with Turbo to 4.2GHz. That's a full 400MHz lower base frequency, in the ES CPUs at least. It might go up, but "insane" clocks they will certainly not have.
A 30% increase in performance over Skylake? That's a nice fantasy... If they could get 30% out of some "optimization", it makes you wonder why they would bother with any process shrinks and new architectures.
Spunjji - Thursday, July 21, 2016 - link
*citation needed
MrSpadge - Thursday, July 21, 2016 - link
Intel has announced nothing but tiny improvements to the video unit in the GPU, and rumors report slight clock speed bumps. That's probably all Kaby Lake is: a new name for the same thing, to let the OEMs please us with ever-"new" shiny toys.
ikjadoon - Thursday, July 21, 2016 - link
TBH, I just want better motherboards at launch. Some of the Z170s on launch day had embarrassingly bad UEFIs. I didn't know motherboard design was changing so drastically... that they haven't figured out how to launch it well every damn time. Aren't they bored over at Gigabyte and ASUS and MSI? Motherboards have barely changed in a decade.
Cliff34 - Tuesday, July 26, 2016 - link
I don't want to sound negative, but I doubt it. If there were huge IPC improvements, Intel marketing would have announced them a long time ago. I tried googling for any update on IPC improvements in Kaby Lake, and so far I have found none, other than a few minor additional features (USB 3.1).
I do hope there are IPC improvements, but even if there are any, they shouldn't be as big as from Broadwell to Skylake, since the design is already improved on the same process node.
HardwareDufus - Thursday, July 21, 2016 - link
I think PPro/PII/PIII/PM had 9 generations (P6, Klamath, Deschutes, Katmai, Coppermine, Tualatin, Banias, Dothan). Netburst had 5 (Willamette, Northwood, Prescott, Prescott-2M, Smithfield), but failed to scale. Intel went back to the P6 architecture (Pro/PII/PIII/PM), modified it heavily and introduced the Core architecture, which is now in its 7th iteration (it's hard to list code names here because they were so different between i3, i5 and i7, and whether it was Desktop or Mobile).
Once we throw out the Netburst setbacks, we see a very steady evolution of a singular foundation spanning 20 years.
HardwareDufus - Thursday, July 21, 2016 - link
I might do one last Desktop with this chip... Depending how good the IGP is and what it's capable of doing in hardware...
ZeDestructor - Thursday, July 21, 2016 - link
Actually, the P6 core lasted only up to Nehalem/Lynnfield. Intel radically redid it from scratch with the Sandy Bridge generation - that would incidentally be why Sandy Bridge is such a landmark chip that refuses to die.
HardwareDufus - Thursday, July 21, 2016 - link
I was only stating that the Core / Core 2 Duo architecture used the P6 core as a starting point, as opposed to the Netburst core as a foundation.
Even Conroe was all new, as the first iteration of the Core architecture. Interesting products like the Pentium M demonstrated to Intel that it was better to trash Netburst, as it was running out of steam, and re-visit P6 as a starting point for future architectures. But Conroe was not P6... it was Core.
But yeah, Sandy Bridge was a huge step change in the development of the Core architecture.
ZeDestructor - Thursday, July 21, 2016 - link
Conroe is somewhat interesting... most of it was Yonah + SSE3/SSSE3, which was itself more or less just Dothan*2. In terms of relationships, Conroe is just far, far closer to P6 than SNB is to Nehalem or Westmere.
ikjadoon - Thursday, July 21, 2016 - link
I haven't heard these words in a long time....
yuhong - Thursday, July 21, 2016 - link
"The delays in moving to 22 nm and then 14 nm meant that they were missing the anticipated product launches for their OEMs, which left the OEMs with quarters where they would have no new products to sell."I wonder why they are still doing one year product cycles.
r3loaded - Thursday, July 21, 2016 - link
Because OEMs still insist on selling computers like it's 1999. Same reason you see AMD and Nvidia rebranding their existing GPUs on parts destined only for OEMs.
A5 - Friday, July 22, 2016 - link
Yep. OEMs want bigger numbers for the Back To School season, so you get a bunch of rebrand/clock bump announcements in June/July/August. It's always been silly, even more so now.
Michael Bay - Thursday, July 21, 2016 - link
At this point I'm more interested in Apollo Lake and what OEMs make with it. That i3-380UM/atrocious Lenovo display combo is almost five years old now; time to upgrade.
YazX_ - Thursday, July 21, 2016 - link
Intel should really slow down. I still have my 3770K Ivy Bridge OCed to 4.5GHz and I see no reason to upgrade; the CPU is still giving top-tier performance, great in all applications, and in games it is like 50% utilized.
euler007 - Thursday, July 21, 2016 - link
Good for you; you probably should invest in a better GPU then. Other people have compute needs. A lot of the software I use at work is bottlenecked by single-thread performance, and throwing additional cores at it doesn't help. Every time Intel increases performance by 10%, it can save my employees many minutes per operation (working on datasets in the gigabytes).
euler007 - Thursday, July 21, 2016 - link
I really need to start proofreading my posts.
ikjadoon - Thursday, July 21, 2016 - link
And, actually, if you game at 120Hz, you need at least 8 very high IPC threads. Very, very few CPUs can consistently render in under 8.3ms. :(
someonesomewherelse - Thursday, September 1, 2016 - link
You also need a rare and expensive, or non-existent, display: 4K, 27", 120Hz, non-TN would be the minimum. And since dreaming never killed anyone, what you'd really want is something that covers your entire field of view, so either an extremely large screen or some kind of light headset; has a contrast ratio larger than the average human eye can perceive (measured at two neighbouring pixels on the same frame); can display a color range larger and denser than average human perception (64 bits per subpixel, floating point); runs at 600Hz with higher-than-human DPI; and can track eye movement, so you can actually make the image focus change as it does in real life. (You need more than the average human in each case, since some people are obviously above average and you want to cover them too.) Then you need a GPU capable of rendering this, and a CPU that makes something capable of 8 very high IPC threads look like an 8-bit microcontroller vs anything we have now. I mean, if we are going for fast, let's go all the way. Until we have this, plus real AI and much more, all running in more than one instance, and all powered by a small and quiet machine that can run for days without being connected to the power grid (so either much lower power consumption, much better batteries, portable fusion reactors, or a combination of the above), and so on, we don't have close to enough tech.
someonesomewherelse - Thursday, September 1, 2016 - link
This just means that you made a smart purchasing decision and that you aren't running the right software. Try transcoding the 10TB of video I have from a wide variety of formats/resolutions/bitrates to something more standardized and smaller while retaining quality. Ideally: one no-holds-barred H.265 version upscaled to 4K with madVR-like enhancements that requires great hardware to play, one smaller and easier-to-decode H.265 copy with fewer audio streams for streaming, and one even smaller H.264 version with a lower resolution and 2-channel audio for playback on shitty devices or over slow networks.
Then tell me your CPU is fast.
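For a sense of what that workload means in practice, here is a sketch of such a batch transcode. It assumes ffmpeg built with libx265/libx264 is on the PATH; the paths, CRF values and presets are illustrative guesses rather than settings from the comment, and only two of the three output tiers are shown:

```python
# Batch-transcode sketch: one high-quality H.265 keep copy and one small,
# easy-to-decode H.264 copy per source file. This is where CPU time goes.
import subprocess
from pathlib import Path

SRC = Path("/mnt/library")      # hypothetical source tree
OUT = Path("/mnt/transcoded")   # hypothetical output tree

for video in SRC.rglob("*.mkv"):
    archive = OUT / "archive" / f"{video.stem}.h265.mkv"
    mobile = OUT / "mobile" / f"{video.stem}.h264.mp4"
    archive.parent.mkdir(parents=True, exist_ok=True)
    mobile.parent.mkdir(parents=True, exist_ok=True)
    # No-holds-barred H.265 version: the slow preset eats cores for hours.
    subprocess.run(["ffmpeg", "-i", str(video), "-c:v", "libx265",
                    "-preset", "slow", "-crf", "18", "-c:a", "copy",
                    str(archive)], check=True)
    # Smaller H.264 version, downscaled, with stereo audio, for weak devices.
    subprocess.run(["ffmpeg", "-i", str(video), "-c:v", "libx264",
                    "-preset", "medium", "-crf", "23",
                    "-vf", "scale=1280:-2", "-c:a", "aac", "-ac", "2",
                    str(mobile)], check=True)
```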
Jefftex11 - Thursday, July 21, 2016 - link
Since I am not a semiconductor engineer, I keep wondering if the graphics side of the CPU could be replaced by heat sink material. The obvious advantage is to provide a heat sink for the thermal-burst nature of the processor, and hopefully allow more heat-conduction surface area for thermal heat flow. Obviously, the chip size has to stay the same for cost/wafer, but it seems to me that this could allow higher sustained clock speeds, which is the path to increased processor IPC. I would appreciate comments from more knowledgeable people on the issue I am missing, because this is so obvious there must be a reason. I assume that there is a minimum chip size just for thermal considerations. I also wonder if Intel assumes that people interested in performance will not use the IG portion of their chip, and it is in reality a heat sink of sorts which allows higher IPC.
LukaP - Thursday, July 21, 2016 - link
Well, given that the IGP sits beside the CPU cores, "cutting it off" and replacing it with dead silicon/heatsink is not a very efficient thing to do, since heat transfer laterally will be very slow.
But you are not actually shooting completely in the dark. Most modern core designs feature some "dead" silicon around the parts that will heat up the most (ALUs/FPUs/vector units/etc.) for the very reason you mentioned: that little bit of silicon will both cushion the burst workloads and protect the neighbouring units from the heat produced by the hard-working areas.
Also, you are confusing IPC (instructions per clock) with (M)IPS, (mega) instructions per second. IPC will stay the same for a given workload, no matter the clockspeed (for example, the maximum IPC for a modern CPU will be around 2 for spam filtering and similar workloads, but much lower, around 0.8, for HPC). MIPS, on the other hand, is a function of IPC and clockspeed: the number of instructions per clock (the IPC number) times the number of clocks per second (the GHz).
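The distinction in code form, a minimal sketch; the 2.0 and 0.8 IPC figures are the ones from the comment above, while the 3.5 GHz clock is just an illustrative value:

```python
def mips(ipc: float, clock_ghz: float) -> float:
    """Million instructions per second = IPC * clocks per second / 1e6."""
    return ipc * clock_ghz * 1e9 / 1e6

# IPC is a property of workload + architecture; MIPS also scales with clock.
print(mips(ipc=2.0, clock_ghz=3.5))  # spam-filter-like workload: 7000.0
print(mips(ipc=0.8, clock_ghz=3.5))  # HPC-like workload: 2800.0
```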
Jefftex11 - Friday, July 22, 2016 - link
Thanks for the reply. You are of course correct on IPC, my fault.
yannigr2 - Thursday, July 21, 2016 - link
I think people who expect more or much faster cores will get disappointed again with a +5%.
It seems, from things that I read, that Intel will be putting more resources into the iGPU (again), and also we will see the company calling the 4-core CPUs (for example) 4+2 CPUs, with the 2 being the GPU cores (something like AMD's 4+8 = 12 compute cores on the APUs). Probably DX12 and multi-adapter are playing a role here.
Any news on that?
CaedenV - Thursday, July 21, 2016 - link
I'm not even sure it will hit a 5% gain. I think that 'optimization' in this case means mostly bug fixes and removing redundant design for better power usage. Look at how many OEMs had major sleep/resume issues this last year. I think those OEMs would be quite happy to see those problems addressed, and then similar performance in a slightly lower-wattage package.
ikjadoon - Thursday, July 21, 2016 - link
Agreed. I think people got "excited" because we never had Broadwell, really... so Haswell -> Skylake was an 8-10% jump.
Pneumothorax - Thursday, July 21, 2016 - link
So Kaby Lake is basically akin to calling a change like 4770K to 4790K a "new generation"? I guess old Intel is copying GPU makers now...
MrCommunistGen - Thursday, July 21, 2016 - link
There are a couple of possibilities that come to mind. In no particular order:
1. Kaby Lake is *just* a clock speed bump of Skylake.
I'll take more performance, but this is probably the least interesting way to do it. In this scenario, instead of releasing new Skylake SKUs, they'd just release new Kaby Lake SKUs. I saw an earlier post say that the KBL ES chips had *lower* clock speeds than SKL, so maybe (hopefully?) this isn't the likely scenario.
2. Kaby Lake is to Skylake what Haswell Refresh was to Haswell.
This might mean the strategy is now:
New Process Node -> New Architecture -> Better Packaging for higher clocks
Better TIM and more robust onboard voltage regulation would still be a boon to enthusiasts. Unfortunately, I don't see how improved packaging would benefit OEMs and it'd increase costs for Intel. Plus, anything targeting mobile wouldn't have a heatspreader to optimize anyway, and with SKL no longer using a FIVR I'm not sure what Intel could change on the CPU's voltage regulation side without also changing their board power delivery spec.
3. Kaby Lake is a new stepping of Skylake.
This would be less costly and less time consuming than really designing a new architecture. They might fix errata, optimize silicon, or make other desired changes that didn't make it into Skylake due to deadlines or budget.
4. A combination of 2 & 3 would be interesting, but since there really isn't any competition in this space, and it would involve 2 completely separate vectors of improvement, I'm pretty comfortable saying that this is just a pipe dream.
MrCommunistGen - Thursday, July 21, 2016 - link
Oh, and if KBL brings with it an improved chipset, that'd help add value as well, regardless of what Intel chooses to do to the CPU.
The rumored increase in connectivity - 24 PCI-E lanes on the PCH - would be good for increased M.2 connectivity or the now (slightly) more common 10GbE port.
ikjadoon - Thursday, July 21, 2016 - link
"Better TIM and more robust onboard voltage regulation would still be a boon to enthusiasts"err, wait. No, we don't want onboard voltage regulation. That was FIVR. That was extra heat on die. We are ostensibly better off without it?
Kakti - Thursday, July 21, 2016 - link
5. Kaby Lake has native support for HDCP 2.2
6. Kaby Lake has native support for 3D XPoint
As MrCommunist said, KL will also have additional PCI-E lanes to feed M.2/U.2 drives.
I'm building at least one low power KL box for an HTPC and possibly a second for an eventual VR box.
sharath.naik - Thursday, July 21, 2016 - link
Still waiting on a mobile quad core with Iris in a laptop, months after they already released the i7-6770HQ!! I guess now I need to wait for Kaby Lake. Now it is making sense why there were no quad laptops with Iris (no dGPU).
systemBuilder - Saturday, July 23, 2016 - link
Apple is probably paying them $250M / month to suppress it until the MacBook Pro Retina 2016 is released ...
LorinT - Thursday, July 21, 2016 - link
Maybe Apple has been so slow to release their MacBook Pro hoping that they could jump to Kaby Lake, and skip Skylake altogether.
A5 - Friday, July 22, 2016 - link
I'd wager the main improvements are power and platform related.
systemBuilder - Saturday, July 23, 2016 - link
The new process technology is known to insiders as "Tick-Tock-Toe".
systemBuilder - Saturday, July 23, 2016 - link
Intel has been known to pursue botched CPU designs for years; almost half a decade was wasted on the Pentium 4 before they gave up and admitted that the Pentium 3 was better. We might all be owners of botched Intel chips. When AMD's Zen is released, we will find out ...
someonesomewherelse - Thursday, September 1, 2016 - link
I wouldn't call Sandy Bridge and its successors botched... the current situation is closer to the situation just before the first Athlons were released. Intel had the best chips and knew it, so they didn't really bother with huge improvements. The Athlon woke them up and made them resume R&D. Now, the P4 ultimately proved a failure, since it was at best competitive with AMD, and any improvements from the later versions were beaten again by the Athlon 64. That's when they realized that the P4 just wasn't going to work, found that they actually had a relatively good design (Pentium M), improved it, and finally beat AMD.
Hopefully AMD pulls another Athlon, since that's the only way you'll see any real effort from Intel to improve their desktop performance.
evilpaul666 - Sunday, July 24, 2016 - link
Any rumors or claims from Intel on IPC improvement, clockspeed, L4 caches, or TDP on the unlocked parts? (They're calling them 'S' instead of 'K' now, I've heard... 'S' used to be the lower-TDP locked ones?)