Samsung’s press release recommended the PRO for “serious users” while the non-PRO would be a good fit for “general computing purposes”, so I wanted to take both for a spin (pun intended) and compare them to see which one could be best for you.
TLC storage is nothing new. Most USB sticks, SD cards, media players and GPS units are built on TLC NAND. OCZ was the first to announce a TLC SSD back in Q4 2011, and Plextor is targeting Q2 2013 for its TLC line, yet Samsung remains the only vendor to actually ship a TLC based SSD, and has been for almost 10 months now. Maybe it has something to do with the fact that Samsung controls every aspect of the SSD manufacturing process, giving the giant a clear advantage over its competitors.
It is evident that MLC is a better technology than TLC, but by how much and at what cost is what I am eager to find out in this Samsung 840 PRO 256GB vs. Samsung 840 250GB review.
For users who do not feel like reinstalling the Operating System from scratch, Data Migration is a pretty straightforward tool. The utility always detects the OS drive as the source, which is a good failsafe, and it only works if at least one of the SSDs is a Samsung. As far as I know, it is Windows only.
Magician 4.0, also Windows only, is well designed and all vital information is readily available. Firmware updates and over-provisioning can be set up with one click of the mouse.
The OS optimization feature offers three pre-set profiles: Performance, Capacity and Reliability. The “Advanced” tab lets power users pick and choose their own preferences. As a personal rule, when in doubt, I always go for “Reliability” over anything else.
Finally, the “Secure Erase” feature wipes all user data from the drive and marks all the space as available. It is often stated that this process puts the drive back to its out-of-the-box state, minus the wear. It is a temporary situation though: as data is written to the drive, garbage collection and write amplification will bring it back to its steady state.
The major difference between the 840 and the 840 Pro is the use of TLC NAND in the non-pro version vs. MLC for the other. Samsung is currently the only vendor offering a TLC SSD version.
TLC NAND (Triple Level Cell) stores three bits per cell, while the PRO’s MLC (Multi Level Cell) NAND stores two bits per cell. Enterprise level SSDs are built with SLC (Single Level Cell), one bit per cell.
Trade-off from MLC to TLC, lower cost:
Higher bit density lowers NAND manufacturing and raw material costs, the end result being a lower consumer price.
Trade-off, lower endurance:
While read I/O is virtually unlimited on NAND, writes are not. Write endurance is the number of P/E (Program/Erase) cycles a cell can withstand before it gets “retired” because its failure rate becomes too high. Once every cell is retired, the drive has reached its write endurance.
The number of P/E cycles depends on the lithography size and the bit density: a smaller process node and more bits per cell both lower the write endurance.
The (educated) estimate for the P/E cycles is summarized in the chart above.
To translate this into real world numbers, the total amount of NAND writes = SSD capacity x P/E cycles. As an example, the 840 250GB would reach its write limit at around 250,000 GB, or 250TB, of NAND writes.
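If you want to sanity check that math yourself, here is a quick snippet. The 1,000 P/E cycles for the 840’s TLC is the estimate from the chart above, and the 3,000 cycles used for the PRO’s MLC is my own assumption, simply following the “three times longer” figure discussed later in this review.

```python
# Estimated total NAND writes before a drive reaches its write endurance.
# P/E cycle counts are educated estimates, not vendor specifications.
def total_nand_writes_gb(capacity_gb, pe_cycles):
    return capacity_gb * pe_cycles

print(total_nand_writes_gb(250, 1000))  # Samsung 840 250GB (TLC): 250,000 GB, ~250 TB
print(total_nand_writes_gb(256, 3000))  # Samsung 840 PRO 256GB (MLC, assumed 3,000 P/E): 768,000 GB
```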
In the best case scenario, such as out of the box, NAND writes equal host writes: a 4KB file written by the host results in 4KB written to the NAND, a 1:1 write ratio. As the storage fills up with files and applications, the SSD has to reorganize existing data to make room for any new data coming from the OS. In the worst case, a 4KB host write could result in a 256KB NAND write, a 1:64 ratio. This phenomenon is called “Write Amplification”. Small files generate a higher WA than large files, and the goal of the controller and the firmware is to keep the WA as close as possible to 1:1.
In summary: Write Amplification = NAND write / Host write.
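Or, as a couple of lines of Python, using the two scenarios from the paragraph above:

```python
# Write Amplification = NAND writes / host writes.
def write_amplification(nand_write_kb, host_write_kb):
    return nand_write_kb / host_write_kb

print(write_amplification(4, 4))    # out-of-the-box best case: 1.0 (1:1)
print(write_amplification(256, 4))  # worst case from the example above: 64.0 (1:64)
```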
Life expectancy (estimated).
Assuming an average of 10 GB of host writes per day and an estimated WA of 3.5, the NAND life expectancy is displayed below.
Formula: Years = ((User Capacity x P/E cycles) / (Host writes per day x WA)) / 365
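Here is the same formula as a small Python sketch, using the working assumptions above (10 GB of host writes per day, a WA of 3.5, 1,000 P/E cycles for the 840’s TLC and an assumed 3,000 for the PRO’s MLC):

```python
# Estimated NAND life expectancy in years:
# Years = (capacity * P/E cycles) / (host writes per day * WA) / 365
def life_expectancy_years(capacity_gb, pe_cycles, host_write_gb_per_day=10, wa=3.5):
    nand_write_budget_gb = capacity_gb * pe_cycles
    nand_write_gb_per_day = host_write_gb_per_day * wa
    return nand_write_budget_gb / nand_write_gb_per_day / 365

print(round(life_expectancy_years(250, 1000), 1))  # Samsung 840 250GB (TLC): ~19.6 years
print(round(life_expectancy_years(256, 3000), 1))  # Samsung 840 PRO 256GB (MLC): ~60.1 years
```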
10 GB of host writes per day seems to be a realistic average. hIOmon logged ~13GB of writes on a Saturday, from 7:15 AM to 12 PM, on my primary computer. I was probably in front of the workstation for 7 hours total, mostly internet browsing, YouTube, espn3 streaming and one 30 minute session of SC2 (the “All in” mission). The number would be lower for me during a work day; for my usage pattern, 10GB/day would be a high workload.
If durability is a concern, there are two things the user could do to improve SSD endurance:
The numbers show that the Samsung MLC version has higher durability, which is no surprise. But looking at the figures from a different perspective, in terms of years rather than Program/Erase cycles, should ease some concerns about TLC NAND durability, unless there is an expectation that the product will last more than 20 years. Kidding aside, the NAND chips would actually be the most reliable component in an SSD compared to the controller, the PCB or (buggy) firmware.
Trade-off, lower performance:
Higher bit density increases error rates. Higher error rates increase retry attempts, retry attempts increase latency and, finally, higher latency translates into lower performance. That was the argument when moving from SLC to MLC, as it is now going from MLC to TLC.
I can see why the 840 would need the fastest clock and as many cores as the controller can support, since there are more tasks to perform with 3 bits per cell.
I have no doubts that the 840 PRO would put up bigger numbers than the 840, but by how much?
I went through most of the popular benchmark tools: AS SSD, CrystalDiskMark, ATTO, IoMeter, Anvil’s Storage Utility RC6 and PCMark Vantage. I also used performance monitoring tools such as DiskMon and hIOmon, primarily to validate the tests. Instead of posting chart after chart, I believe that what matters to a consumer is how the product fits their needs, not chasing uber high numbers that are only attainable during benchmarking. So I narrowed it down to Anvil’s Storage Utility and the licensed PCMark Vantage Pro version.
Drive conditioning: The SSDs were prepped with Windows 7 (from an image), filled to about 50% of their storage capacity, and the benchmarks were run with the tested unit acting as the OS drive. The drive content consisted of the Windows 7 x64 OS, the benchmark utilities and four WoW folders of 22GB each, for a total of 109GB on the largest unit, the Intel 525. For smaller capacities, the number of WoW folders was reduced, and the 30GB unit was only filled with data.
Steady state: This state occurs over time, once the drive has gone through enough write cycles, or more specifically program/erase (P/E) cycles, that write performance becomes consistent and stable. It may take a few weeks before the SSD reaches it, depending on the computing usage, but it can be accelerated using IoMeter.
In summary, Steady State is: Written Data = User capacity x 2, at least.
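For anyone wondering how much IoMeter time that represents, here is a rough sketch. The 250 MB/s sustained write speed is just an assumed figure for illustration; plug in your own.

```python
# Rough steady-state helper based on the rule of thumb above:
# written data should reach at least 2x the user capacity.
def steady_state_reached(written_gb, user_capacity_gb):
    return written_gb >= 2 * user_capacity_gb

def hours_of_iometer_needed(user_capacity_gb, written_gb, write_speed_mb_s=250):
    remaining_gb = max(0, 2 * user_capacity_gb - written_gb)
    return remaining_gb * 1024 / write_speed_mb_s / 3600

print(steady_state_reached(180, 250))               # False: 180 GB written on a 250 GB drive
print(round(hours_of_iometer_needed(250, 180), 1))  # ~0.4 h at an assumed 250 MB/s sustained write
```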
What numbers are relevant in a real world usage?
Keep in mind that unlike synthetic benchmarks, which perform one specific operation at a time for a predetermined duration (sequential read, then sequential write, then random read, and so on), real world usage paints a different picture. All four access types can occur at any time, at different transfer sizes and in different proportions. For instance, the storage subsystem of a streaming server would mostly see high sequential read I/O with large blocks and very little to no writes, while a database server without blob data types would probably see something like 75% random reads, 20% random writes and 5% sequential reads and writes. I could either guesstimate the different ratios or come up with a method to define a more accurate I/O usage baseline.
I/O Baseline.
While it is entertaining to run a bunch of benchmarking tools and expect huge numbers, the purpose of testing the units is to get a good look at how they perform under a realistic desktop usage pattern. That is why I picked the PCMark Vantage suite as my usage pattern. By capturing and analyzing I/O during the PCMark Vantage (PCVM) run, disk operations are broken down into percentage of reads vs. writes, random vs. sequential, queue depth and average transfer size.
With that information, benchmarking makes more sense: since not all numbers carry the same importance, some results are more valuable than others.
In summary, the I/O pattern defines what I need from the device vs. what the device can do overall.
The I/O baseline process was explained in the Intel 525 mSATA review.
From the numbers, I rated the I/O usage by activity as follows: Random Read > Random Write > Seq Read > Seq Write, with an average transfer size of 128K.
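For readers who would rather build a similar baseline without hIOmon, here is a minimal sketch of the idea. The CSV columns (offset, size, is_read, queue_depth) and the file name are hypothetical stand-ins for whatever your capture tool exports, not hIOmon’s actual format.

```python
# Breaks a captured I/O trace down into read vs. write, random vs. sequential,
# queue depth distribution and average transfer size.
import csv
from collections import Counter

def summarize_trace(path):
    reads = writes = random_io = seq_io = total_bytes = count = 0
    qd_hist = Counter()
    last_end = None
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            offset, size = int(row["offset"]), int(row["size"])
            reads += row["is_read"] == "1"
            writes += row["is_read"] == "0"
            qd_hist[int(row["queue_depth"])] += 1
            # Count an I/O as sequential if it starts where the previous one ended.
            if last_end is not None and offset == last_end:
                seq_io += 1
            else:
                random_io += 1
            last_end = offset + size
            total_bytes += size
            count += 1
    if count == 0:
        return {}
    return {
        "read_pct": 100 * reads / count,
        "write_pct": 100 * writes / count,
        "random_pct": 100 * random_io / count,
        "seq_pct": 100 * seq_io / count,
        "avg_transfer_kb": total_bytes / count / 1024,
        "qd_histogram": dict(qd_hist),
    }

# Example usage with a hypothetical trace file:
# print(summarize_trace("pcmark_vantage_trace.csv"))
```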
To cover Queue Depth, I ran hIOmon during a full PCMark Vantage run. There is a one week trial version, which is enough time to build the baseline. Based on the chart below, it is obvious that a benchmark score at QD 16 (or more) does not carry the same weight as a score at QD 1.
Samsung 840 Pro 256GB ASU score
Samsung 840 250GB ASU score
READ 4K – QD1 – QD4 – QD16 (Higher is better)
READ 32K – 128K – SEQ 4MB (Higher is better)
While it was a given that the PRO would be faster than the TLC version, I did not anticipate a ~20% read performance difference between them, especially at low QD (4 and below).
I was expecting a bigger difference at low QD in favor of the 840 PRO. As more stress is put on the drives, the 840 quickly tops off at 242MB/s while the PRO version still has some gears left under the hood. Keep in mind, though, that in a desktop environment it is very unlikely that the storage subsystem would see anything higher than QD4 for an extended period of time.
PCMark Vantage – HDD – Productivity – Gaming (Higher is better)
PCVM scores even things out. The heaviest I/O bound benchmark, PCVM HDD, only shows a 6% performance advantage for the PRO over the TLC version.
Coming in, I knew that the 840 PRO is a better product than the 840. What I was mostly interested in was whether the performance gap would justify the 39.11% ($70) increase in price. If performance and durability were the only concerns, we would all be running SLC based SSDs. But we are not, and it is because of the cost factor.
The final chart below summarizes cost vs. benchmark scores, life expectancy in years, storage capacity and warranty. In other words, this is the “Bang For The Buck” chart.
From a storage capacity and performance perspective, the 840 TLC is more cost effective than the 840 PRO, by about 27%.
In terms of longevity, life expectancy and warranty, the 840 PRO appears to be the better investment only if the product is meant to last for at least 27 years. As for warranty, 5 years does, sure, look more reassuring than 3 years, although it would take 3.6 years of use for the 840’s cost per warranty year to even out with the 840 PRO’s.
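Those break-even figures are easy to reproduce with a quick sketch. The $179 and $249 prices below are back-calculated from the $70 / 39.11% gap mentioned earlier, so treat them as assumptions rather than quoted street prices.

```python
# Back-of-the-envelope cost comparison. Prices are assumed ($179 for the 840,
# $249 for the 840 PRO), derived from the $70 / 39.11% difference cited above.
def cost_per_gb(price, capacity_gb):
    return price / capacity_gb

def warranty_cost_per_year(price, warranty_years):
    return price / warranty_years

print(round(cost_per_gb(179, 250), 3))           # 840:     ~$0.716 per GB
print(round(cost_per_gb(249, 256), 3))           # 840 PRO: ~$0.973 per GB
print(round(warranty_cost_per_year(249, 5), 2))  # 840 PRO: ~$49.80 per warranty year
# Years of use needed for the 840's cost per year to match that rate: ~3.6
print(round(179 / warranty_cost_per_year(249, 5), 1))
# Per-GB saving of the 840, in the same ballpark as the ~27% cost-effectiveness figure above
print(round(100 * (1 - cost_per_gb(179, 250) / cost_per_gb(249, 256)), 1))
```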
History repeats itself: MLC faced the same criticisms compared to SLC when it was introduced. Over the years, MLC overcame the doubts by improving write I/O handling to increase performance and durability; the industry did so by implementing wear leveling, over-provisioning, improved ECC and TRIM support via the OS. At the time, MLC technology was looked down upon just as TLC is now, and for the same reasons.
Synthetic benchmarks do show bigger numbers in favor of the PRO, while application trace testing evens the playing field. It is hard to argue about the durability gap between the 840 PRO and the 840; numbers do not lie, and one would last three times longer than the other. But when both SSD lifetimes are presented in years instead of P/E cycles, it is not as “bad” as it looks.
In summary, they are both good products, and potential buyers cannot go wrong with either one. Picking one over the other would depend on the following scenarios:
How would you decide between the Samsung 840 PRO and the TLC version? Share with us your thoughts in the comment section below!