DukeN - Monday, November 5, 2012 - link
Now please give us some results with benchmarks relevant to enterprise users (e.g. RAID performance, wear levelling vs. other enterprise drives).
chrone - Monday, November 5, 2012 - link
Finally getting more consistent performance over time. Nice writing, Anand, as always! :)
edlee - Monday, November 5, 2012 - link
On paper this is very nice, but I am not having any issues with the current crop of SSDs.
How about Intel helps design a new SATA standard that supports more than 6Gbps, like 50Gbps, so it's future-proof and can deal a death blow to Thunderbolt.
Conficio - Monday, November 5, 2012 - link
You realize that
* Thunderbolt is an Intel technology. So they are not looking to kill it
* That Thunderbolt can route your entire PCI bus across physical locations (6 m now, with optical cables ~100 m [if memory serves me])
* That said, you want SSD interfaces going directly to the PCI bus (not inventing another intermediate bus that is built for a different technology (spinning disks))
* That direct PCI interfaces for SSDs is where things are going
dananski - Monday, November 5, 2012 - link
"...direct PCI interfaces for SSDs is where things are going"
I would like to see this become more common. There's 8Gb/s of spare PCI-E bandwidth on one slot on my machine at the moment.
But what if SSDs advance faster than even PCI-E? I wonder if they could bring the interface even closer to home by allowing NAND chips to plug into memory-like slots on the motherboard (yay, easy upgrade path), with the controller integrated into the CPU? The controller should be relatively unobtrusive - how much die area would it take at 22nm? And could some of the operations run efficiently on the main CPU to cut down that die area overhead some more?
JohnWinterburn - Monday, November 5, 2012 - link
As much as Fusion-io et al. would like it to be, direct PCI interfaces are certainly not where it's going for this market.
You can't replace them easily when they break (as some always will when you have enough of them), you can't fit that many in a box, you have to rely on a single manufacturer, and you're then tied into their software.
None of that's going to change any time soon, so PCI-interfaced SSDs will be small scale or for specific projects.
ogreslayer - Monday, November 5, 2012 - link
That is what SATA Express and SFF-8639 will be for, and they were announced a while ago.
http://www.anandtech.com/show/6294/breaking-the-sa...
Maybe not 50Gbps, but at 4GB/s and providing 32Gbps it isn't a small jump. Even the 2GB/s gen3 connection isn't something to sneeze at.
iwod - Monday, November 5, 2012 - link
I still fail to understand why we need both SATA Express and SFF-8639, when one could have ruled them all. The main difference between SATA and SAS is that SATA is half duplex and SAS is full duplex, but the underlying PCI-Express protocol is full duplex by design, so why make another SATA Express and not just use SFF-8639?
And I hope we start with PCI-E 3.0 too; by the time these things arrive there is no point in using the older and slower PCI-E 2.0.
Kevin G - Monday, November 5, 2012 - link
Look into SATA-Express. It essentially uses two PCI-E 2.0 lanes for data transfer (16 Gbit/s, with 32 Gbit/s when the spec migrates over to PCI-E 3.0). There is some backwards compatibility with SATA too.
SATA-Express will likely coexist with Thunderbolt, though. SATA Express is aimed at internal storage, whereas Thunderbolt is aimed at external peripherals (where storage is just one aspect).
Kevin G - Monday, November 5, 2012 - link
I'm curious about the raw depth of ECC in this device. ECC on the internal SRAM is pretty much expected for enterprise-grade equipment nowadays. ECC on the DRAM is also expected, but I'm wondering how it is implemented. Chances are that the drive doesn't house 9 DRAM chips for a traditional 72-bit-wide ECC-protected bus. ECC on the NAND could be implemented at the block level (576-bit blocks with 512 bits of data + 64 bits of ECC), but that'd require some custom NAND chips.
As for the indirection tables, I suspect that the need to hold the entire table in DRAM stems from the table having to optimize the copy in NAND. Optimizing here can likely be done without the massive DRAM cache, but I suspect the optimization process would require too many reads/writes, to the point it'd be detrimental to the drive's life span.
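For a rough sense of the overheads in those two layouts (both are the hypotheticals described above, not Intel's published design), the storage cost works out to the same fraction either way:

```python
# Back-of-the-envelope sketch comparing the ECC overhead of the two layouts
# mentioned above; both are illustrative hypotheticals, not S3700 specs.

def ecc_overhead(data_bits: int, ecc_bits: int) -> float:
    """Fraction of raw capacity spent on ECC bits."""
    return ecc_bits / (data_bits + ecc_bits)

# Traditional 72-bit ECC DRAM bus: 64 data bits + 8 check bits per word,
# i.e. one extra x8 chip for every eight (the "9 DRAM chips" figure).
dram_overhead = ecc_overhead(64, 8)

# Block-level NAND ECC: 576-bit blocks holding 512 data bits + 64 ECC bits.
nand_overhead = ecc_overhead(512, 64)

print(f"72-bit DRAM bus ECC overhead:    {dram_overhead:.1%}")  # ~11.1%
print(f"576-bit NAND block ECC overhead: {nand_overhead:.1%}")  # ~11.1%
```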
blackbrrd - Monday, November 5, 2012 - link
Sounds like a huge improvement for databases. The write endurance looks phenomenal!
FunBunny2 - Monday, November 5, 2012 - link
What he said!!
Guspaz - Monday, November 5, 2012 - link
I'm saddened by the increasingly enterprise-oriented focus of Intel. Their SSDs have quite a good reputation in consumer circles as providing reliable performance and operation, and their latest product line (the 330 series) definitely has consumer-level pricing. They're currently sitting at $0.78/GB on the 240GB model, which is pretty competitive with the rest of the market.
The nice thing was that Intel is the safe bet; if you don't want to sort through all the other stuff on the market, you can feel pretty safe buying an Intel. Yes, they've had issues, but generally fewer than other SSD manufacturers. But with pricing like the S3700 is featuring, the days of Intel being competitive in the consumer space may be over...
I'd rather see Intel take a two-tiered approach. By all means, keep putting out the enterprise drives for the high margins, but also keep a toe in the consumer market; they'll get a good deal of sales there based on their reputation alone.
karasaj - Monday, November 5, 2012 - link
Just because this is an enterprise SSD doesn't mean that Intel is 100% abandoning the consumer market, y'know. They can focus on enterprise but still release consumer SSDs.
martyrant - Monday, November 5, 2012 - link
$235 at launch for a 100GB performance SSD will not seem too bad to the enthusiast "consumer" circle. That will, of course, drop over time, and bring it within the means of even more budget-minded enthusiasts. It was not long ago people were shelling out $200-250 for 80GB Intel X-25M / G2s. I still have two in RAID 0 that I just replaced this last weekend with 4x128GB Samsung 830s in RAID 0 (for $70/piece, that's not a bad 512GB [unformatted] setup). My girlfriend's PC is inheriting the G2s. While $235 for 100GB is still on the high end, I'm sure there will be people who will pay that in the consumer market when they launch if they really do solve some of the IO issues (I have noticed quite a few with Windows 8, not so much in Windows 7 remarkably...but Win8 has serious DPC issues to begin with).
Omoronovo - Monday, November 5, 2012 - link
Windows 8 has no DPC issues. There are no updated applications that can measure DPC correctly with the deferred timing in the Windows 8 kernel, making it appear to have a constant/high DPC.
Additionally, DPC latency has nothing to do with disk accesses. Disk accesses are not a function of interrupts in the kernel, unlike audio and video.
Kjella - Monday, November 5, 2012 - link
With the market going to even smaller process sizes and TLC, the drives can't take enthusiast use anyway; my SSD life meter tells me my drive is going to die after 3.5 years - and that's after I wore out one in 1.5 years being nasty with it. Right now my C: drive is 83GB... 100GB is maybe cutting it a little short, I'd like at least 150GB, but otherwise yeah, this is a drive I could want.
ExarKun333 - Monday, November 5, 2012 - link
Many of the enterprise offerings end up trickling down to consumer products. Just be patient. :)
Beenthere - Monday, November 5, 2012 - link
No offense intended, but it's totally inaccurate to state that "Intel is the safe bet". They have had issues with their consumer grade SSDs like most other SSD suppliers who rush products to market without proper validation. I would not trust an Intel SSD any more than most of the other drives, with few exceptions. Until an SSD company proves their product is fully compatible, reliable, doesn't change size or lose data, or disappear from the system, I'm not buying the hype.
I'm from Missouri - the SHOW ME state.
martyrant - Monday, November 5, 2012 - link
So are you speaking from personal experience with Intel SSDs, since you are from the "SHOW ME" state?
I have 4 Intel SSDs (two G2s, two 320s) and have had zero issues with them. I bought four OCZ Vertex 4s a little over a month ago and returned all four of them because of compatibility issues and them consistently appearing/disappearing in single and RAID configurations in multiple computer setups. I'd also owned a 64GB OCZ V2 that I've since given away (RMA'd it 3 times because it kept dying; didn't care to bother with it after that). I have had zero issues with the Intel SSDs and am hoping to find the same reliability with the 830s I just upgraded to.
Also, if you actually looked / did some research, you would find that Intel has had a lot fewer issues than other companies (even though they have had some of the same Sandforce issues as other mfgs)....sometimes, claiming you sit around waiting for someone to "SHOW" you the proof, it sounds like you are a couch potato who still cares who wins the election because you actually think one is different than the other...and msnbc/cnn/fox/history/discovery/comedy central told you so (just saying, going out and gathering your own empirical information is worth it sometimes).
Impulses - Monday, November 5, 2012 - link
I'd say Samsung is about on par with Intel if you look at the number of major bugs requiring immediate firmware updates, etc. Intel's rep took a bit of a hit when even they couldn't release an entirely bug-free Sandforce drive, IMO (though it wasn't a surprise).
Death666Angel - Tuesday, November 6, 2012 - link
Not to mention the 8MB bug with their own controller. No product is safe, but Samsung, Intel, Crucial and Plextor seem the safest, with Samsung and Crucial also being very price competitive. But that's just how I see it. :D I have had 2 OCZ drives and not a single problem with either.
Taft12 - Monday, November 5, 2012 - link
Why are you complaining about scenarios that don't exist??
"But with pricing like the S3700 is featuring, the days of Intel being competitive in the consumer space may be over..."
THIS DRIVE IS NOT FOR THE CONSUMER SPACE!
"I'd rather see Intel take a two-tiered approach. By all means, keep putting out the enterprise drives for the high margins, but also keep a toe in the consumer market."
THIS IS EXACTLY WHAT INTEL HAS RIGHT NOW! And there's no indication that will change. This doesn't just apply to the SSD space; they've had separate consumer and server CPU lines for decades.
chrnochime - Monday, November 5, 2012 - link
Or just pay more for enterprise. Not like it isn't going to keep dropping in price anyway.
philipma1957 - Monday, November 5, 2012 - link
My usage would take 10 or more years to kill this SSD. 800GB would be pricey, but a 400GB for $700 on sale would be very tempting.
MrSpadge - Tuesday, November 6, 2012 - link
Spending big on a drive with strong endurance, hoping it will last 10 years, doesn't sound like a good idea to me. Reasons:
- other parts of the SSD may fail before the NAND wears out
- performance and price are still developing so rapidly that you probably won't want to use this drive in 5 years anyway
- see it like this: if instead of paying $700 now you go for a smaller drive with less endurance at $350, you can use that $350 (plus interest) to buy a new drive in 5 years (if your SSD is really worn out by then). That one should be way faster and much bigger than the original drive, providing much better value for the next 5 years. Plus, if the old drive still works, you could still use it in a less "enthusiastic" configuration.
mayankleoboy1 - Monday, November 5, 2012 - link
So Intel is proud that it keeps no user data in the DRAM.
But what about SandForce and Marvell controllers? Do they use DRAM for caching user data?
Is this configurable by the OEM?
Death666Angel - Tuesday, November 6, 2012 - link
As far as I know, everyone in the consumer space but Intel caches user data in the DRAM, and they aren't dodging that either. For normal consumer use, I don't see why that would be any issue either. If you are worried about that last bit of data integrity, get an enterprise solution or a UPS, which should solve the issue too. :)
kaix2 - Monday, November 5, 2012 - link
The new controller sounds very promising for all of us who have been waiting for a new Intel controller. I would expect Intel's consumer drives to eventually get the same controller, and as far as the price concern goes, I bet most of the price premium is really from the HET-MLC NAND vs. regular MLC NAND. Regular consumers don't need 10 drive writes per day, and the drives should be much cheaper with just regular MLC NAND.
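To put that "10 drive writes per day" figure in perspective, a quick back-of-the-envelope comparison; the 5-year rating period and the ~20 GB/day consumer workload are assumptions for illustration, not numbers from the drive's spec sheet:

```python
# Rough endurance comparison: 10 drive-writes-per-day over an assumed 5-year
# rating period vs. an assumed heavy consumer workload of ~20 GB of writes/day.
capacity_gb = 100
rating_years = 5

rated_writes_tb = capacity_gb * 10 * 365 * rating_years / 1000   # ~1825 TB
consumer_writes_tb = 20 * 365 * rating_years / 1000               # ~36.5 TB

print(f"Rated at 10 DWPD for {rating_years} years:       ~{rated_writes_tb:.0f} TB written")
print(f"Assumed heavy consumer use for {rating_years} years: ~{consumer_writes_tb:.1f} TB written")
# The assumed consumer workload touches only a few percent of the HET-MLC
# budget, which is why regular MLC would be plenty (and cheaper) for that market.
```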
cdillon - Monday, November 5, 2012 - link
Not caching user data writes in DRAM so that you can't lose them when the power goes out is all well and good, but what happens with indirection table updates, which will have to happen AT THE SAME TIME and are inextricably linked? Losing an indirection table mapping to new user data that was just written is no less bad than losing the actual user data, because either way you're losing the data.
Intel has two options here... They can either write indirection table updates directly to NAND at the same time as the user data, or they can cache the indirection table updates only in DRAM and then write them to flash later. Obviously the former is the safest option and I presume this is what Intel is doing, but I've never seen anybody mention how they handle protecting the mapping table updates on any SSD, since they can arguably be MORE important than a little bit of user data due to the risk of losing absolutely everything on the drive if the table gets completely out of whack.
Kevin G - Monday, November 5, 2012 - link
There is mention of a large capacitor to allow for writing the cache to NAND in the event of a power failure.
There are a couple of things Intel can do in this event to eliminate the possibility of cache corruption.
The first is a write-through of any immediate change to the indirection tables. The problem of coherence between the cache and NAND would still exist, but it wouldn't require writing the entire cache to NAND. Making the DRAM cache write-through would impact the write/erase cycles of the drive, but I'm uncertain of the magnitude in comparison to heavy write IO.
The second option is that if the DRAM is used to create an optimized version of the indirection tables for read-only purposes, the old table in the NAND would still be valid (unless there needs to be a change due to a write). Thus power loss would only lose the optimized table in DRAM, but the unoptimized one would still be functional in the NAND.
The third option involves optimized tables being written to NAND while the unoptimized version is still in use. The last operation of writing the optimized indirection table would be switching the status of which table is in active use. Thus the optimized table is only put into use after it has successfully been written to NAND. Sudden power failure in this process wouldn't impact the drive.
A fourth idea that comes to mind would be to reserve where the next optimized table would exist in NAND. Thus in the event of a sudden power failure, the SSD will use the unoptimized indirection tables but be able to see if anything has been written to the reserved space - it would know if it suffered a power loss and could take any recovery actions as necessary. This would eat space, as the active table, a table being written, and space for a future table to be written would all be 'in use'.
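A minimal sketch of how the third and fourth options could be made power-fail safe (an illustration with invented names, not anything taken from the S3700 firmware): write the full new table copy into a spare reserved region first, and only then flip a tiny "active table" record, so the drive always boots from a complete table:

```python
# Hypothetical illustration of an atomic indirection-table commit.
# All names and the two-slot layout are invented for this sketch.

class TableStore:
    def __init__(self) -> None:
        self.slots = {0: None, 1: None}   # two reserved NAND regions for table copies
        self.active_slot = 0              # tiny record, updated last and atomically

    def nand_write(self, slot: int, table: dict) -> None:
        self.slots[slot] = dict(table)    # stand-in for programming NAND pages

    def commit(self, new_table: dict) -> None:
        spare = 1 - self.active_slot
        self.nand_write(spare, new_table) # step 1: full copy goes to the spare slot
        # A power loss anywhere above leaves the old, complete table untouched.
        self.active_slot = spare          # step 2: single small switch-over write

    def load_after_power_loss(self) -> dict:
        # Whichever slot is marked active always holds a complete, valid table.
        return self.slots[self.active_slot]
```

Used this way, the worst a power loss can do is discard an optimization that had not yet been committed, which matches the failure mode described above.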
cdillon - Monday, November 5, 2012 - link
Personally, I don't care if an SSD stores my user data (acknowledged writes, specifically) and/or internal metadata in a DRAM cache as long as it is battery and/or capacitor backed so that the cache can be flushed to NAND after a power failure.
I think what I originally intended to say in my first comment was: if Intel is not caching user data in DRAM, then what ARE they caching in DRAM that requires the super-capacitors to give them time to write it to NAND? If it isn't user data, then it must be the indirection tables or some other critical internal metadata. This internal metadata is at least as important as the user data itself, so why even make the distinction? The distinction stinks to me as either a marketing ploy or catering to some outdated PHB "requirement" that they need to meet in order to actually sell these drives to some enterprises. I'm not saying it's bad, just odd and probably non-optimal.
Kevin G - Monday, November 5, 2012 - link
It is likely buffering the indirection table writes to reduce the number of NAND writes. Essentially it helps with the drive's overall endurance. How much so would depend on just how frequently the indirection table is written to.
The other distinction is that they could be hitting an access time limitation by reading the indirection tables from NAND and then reading the data. By caching this in DRAM, the controller can lower access latencies to the NAND itself.
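A rough feel for that latency argument, using generic assumed NAND and DRAM timings rather than anything measured on this drive:

```python
# Assumed, generic timings: ~75 us for a NAND page read, ~0.1 us for a DRAM lookup.
NAND_READ_US = 75.0
DRAM_LOOKUP_US = 0.1

# Mapping kept only in NAND: one read for the table entry, then one for the data.
lookup_via_nand = NAND_READ_US + NAND_READ_US

# Mapping cached in DRAM: near-free lookup, then a single NAND read for the data.
lookup_via_dram = DRAM_LOOKUP_US + NAND_READ_US

print(f"table in NAND: ~{lookup_via_nand:.0f} us per random read")
print(f"table in DRAM: ~{lookup_via_dram:.1f} us per random read")  # roughly half
```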
nexox - Monday, November 5, 2012 - link
Not storing user data in DRAM still helps - it forces the drive controller to actually operate efficiently instead of just fixing problems with more write cache. The indirection table doesn't change all that fast, so there won't be that much of it to flush out to NAND on power loss, but it's easy to build up a lot of user data in write cache, which requires that much more capacitance to get durably written.
And FYI, many SSDs will acknowledge a write when the data hits NAND durably, but will not guarantee that the corresponding indirection table entry is durably stored, so on power failure some blocks may appear to revert to their old state, from before the synced write took place.
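A back-of-the-envelope illustration of why the capacitance requirement scales with the amount of cached write data; every number here is an assumption chosen for the example, not a measurement of any drive:

```python
# Assumed figures: 256 MB of cached writes flushed at 200 MB/s while the
# controller and NAND draw 4 W, with capacitors usable from 5.0 V down to 3.0 V.
cache_mb, flush_mb_per_s, power_w = 256, 200, 4.0
v_start, v_cutoff = 5.0, 3.0

holdup_s = cache_mb / flush_mb_per_s        # ~1.28 s the caps must carry the drive
energy_j = power_w * holdup_s               # ~5.1 J drawn during the flush

# Usable capacitor energy between two voltages: E = 1/2 * C * (V1^2 - V2^2)
capacitance_f = 2 * energy_j / (v_start**2 - v_cutoff**2)

print(f"hold-up: {holdup_s:.2f} s, energy: {energy_j:.1f} J, "
      f"capacitance: {capacitance_f:.2f} F")   # on the order of 0.6 F
# A small indirection-table flush needs only a tiny fraction of this, which is
# the point above about limiting what sits in volatile write cache.
```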
Death666Angel - Tuesday, November 6, 2012 - link
"Not storing user data in DRAM still helps - it forces the drive controller to actually operate efficiently instead of just fixing problems with more write cache."
And why should I care how the problem is fixed?
Efficient programming or throwing more hardware at the problem is the same thing for 99% of usage cases. If power consumption is a concern, then maybe one solution works better than another, but for the most part, a fix is a fix, at least in my book.
Kevin G - Tuesday, November 6, 2012 - link
How the problem is fixed would matter to enterprise environments, where reliability reigns supreme. How an issue is fixed in this area matters in the context of it happening again, just under different circumstances.
In this example, throwing more DRAM at SSDs as a write cache would be appropriate for consumers to address the issue, but not necessarily for the enterprise market. Keeping data in flash maintains data integrity, which matters in scenarios of sudden power failure. The thing is that enterprise markets have a different usage scenario, where the large write buffer that resolved the issue for consumers could still be an issue at the enterprise level (i.e. the SSD would need an even larger DRAM buffer).
Bullwinkle J Moose - Monday, November 5, 2012 - link
Did I miss something?
With 1:1 mapping, this sounds like the world's first truly OS-agnostic controller.
Does it require an OS with TRIM or a partition offset for XP use, or did Intel just make the world's first universal SSD?
The 320 may have handled partition offsets internally, but it still required TRIM for best performance.
Please correct me if I'm wrong
jwilliams4200 - Tuesday, November 6, 2012 - link
You're wrong. You have misunderstood how the indirection table works.
iwod - Monday, November 5, 2012 - link
The only new, and truly innovative, part of this controller is actually the software side of things: the 1:1 mapping and a basically super-fast storage table for updating and deleting, backed by ECC RAM.
Couldn't 70-90% of this performance gain be implemented with other controllers if they had large enough ECC DRAM?
Please correct me if I'm wrong
And what is the variation in random I/O for other enterprise-class SSDs like Fusion IO?
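On why the flat 1:1 table leans on a large (ECC) DRAM in the first place, here is a back-of-the-envelope sketch; the 4 KiB mapping granularity and 4-byte entries are assumptions for illustration, not Intel's published table format:

```python
# Assumed parameters: one 4-byte physical-address entry per 4 KiB logical sector.
def flat_table_size_mib(capacity_gb: float, sector_kib: int = 4, entry_bytes: int = 4) -> float:
    sectors = capacity_gb * 1024 * 1024 / sector_kib    # logical sectors to map
    return sectors * entry_bytes / (1024 * 1024)         # flat table size in MiB

for cap in (100, 200, 400, 800):
    print(f"{cap:>3} GB drive -> ~{flat_table_size_mib(cap):.0f} MiB of DRAM for the flat table")
# Roughly 1 MiB of table per GB of flash under these assumptions, which is why a
# drive-sized DRAM (plus ECC) is needed to keep the whole table resident.
```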
MrSpadge - Tuesday, November 6, 2012 - link
To me it sounds like this change requires an entirely different controller design, or at least a checking and rethinking of major parts. Intel surely didn't tell us everything that changed, just the most important result of the changes.
nathanddrews - Tuesday, November 6, 2012 - link
C'mon, man! You're killing me! XD
Pandaschnitzel - Thursday, November 8, 2012 - link
If the encryption is based on SED I would like to see another comparison. Windows 8 and BitLocker allow drive encryption like TrueCrypt. The difference is that BitLocker can use the TPM of the motherboard. With an SSD based on the SED standard, the TPM could directly use the hardware encryption of the SSD, bypassing the CPU. This should result in significant performance gains compared to standard encryption. Here the penalty of encryption would be interesting. The new BitLocker also allows encrypting only the used sectors. This is very important for SSDs, because previously the whole drive was encrypted, which had a negative impact on SSDs - at least without massive overprovisioning.
jamescox - Friday, March 1, 2013 - link
I don't care about battery life too much; I mostly have my laptop plugged in. I don't think I would have issues with the write endurance of a lower-priced drive, but I would pay significantly more for higher reliability (no crashes, hangs, or data corruption). I think the consistent response time would be a big plus, especially if the drive was being used for swap space.
Would something like the 100 GB version take too much power? It looks like the max power consumption is only around 3 W for the 100GB version, compared to 0.6 to 0.9 W for most other consumer drives. It seems to be selling for more than MSRP ($235) right now though. About $270 is the lowest I have seen for the 100 GB version.