Because of their small form factor, mSATA SSDs are mainly found in ultra-portable devices such as ultrabooks, tablets or ultra-compact PCs. Lower-capacity units, 30GB or 60GB, are prime candidates for the Intel Rapid Storage Technology caching feature, as long as your motherboard supports it. Since there is no benefit to using an SSD as a cache for another SSD, an SSD/HDD combo is the optimal setup for that purpose. With the Intel 525 mSATA series, there are plenty of capacity options to choose from: 30GB, 60GB, 120GB, 180GB and 240GB. They are ridiculously tiny, and I am curious to know what the trade-off is: size vs. performance vs. price?
Intel Performance sheet
A full comparison of all the models is available on Intel's website
Over-provisioning = (Physical capacity (GB) − User capacity (GB)) / User capacity (GB)
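As a quick illustration, here is a minimal sketch of that formula in Python; the raw NAND figure used in the example is an assumption for the sake of the demonstration, not an Intel-published number.

def over_provisioning(physical_gb, user_gb):
    """Over-provisioning expressed as a fraction of the user-visible capacity."""
    return (physical_gb - user_gb) / user_gb

# Hypothetical example: a drive with 256GB of raw NAND exposing 240GB to the user.
print(f"{over_provisioning(256, 240):.1%}")  # ~6.7%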
I went through most of the popular benchmark tools (AS SSD, CrystalDiskMark, ATTO, IoMeter, Anvil’s Storage Utility RC6 and PCMark Vantage) as well as performance-monitoring tools such as DiskMon and hIOmon, primarily to validate the tests. Instead of posting chart after chart, I believe that, as a consumer, what matters is how the product fits your needs, not chasing uber-high numbers that are only attainable during benchmarking. I narrowed it down to Anvil’s Storage Utility and the licensed Pro version of PCMark Vantage.
Drive conditioning: The SSDs were prepped with Windows 7 (from an image), filled to about 50% of their storage capacity, and the benchmarks were run with the tested unit acting as the OS drive. The drive content is the Windows 7 x64 OS, the benchmark utilities and four WoW folders of 22GB each, for a total of 109GB on the largest Intel 525 unit. For the smaller capacities, the number of WoW folders was reduced, and the 30GB unit was simply filled with data.
Steady state: This state occurs over time, once the drive has gone through enough write cycles, or more specifically program/erase (P/E) cycles, that write performance becomes consistent. It may take a few weeks before an SSD reaches it, depending on computing usage, but it can be accelerated with IoMeter. In summary, steady state requires: Written data = User capacity x 2, at least.
My test system specifications
Intel provided the mSATA to SATA adapter
Performance is defined by two criteria: throughput and response time (RT), also known as access time or latency. A low access time keeps the queue depth low and improves application response time.
While RT is pretty straightforward, throughput isn’t. A storage subsystem mainly performs two functions, read and write, and the access type is either random or sequential. In no particular order, throughput is therefore defined by random read, sequential read, random write and sequential write. In addition, other variables affect the final result, such as queue depth, transfer size (4K, 32K, up to 8192K) and, with the SandForce controller, data compressibility.
Keep in mind that unlike synthetic benchmarks, which perform one specific operation at a time for a predetermined duration (sequential read, then sequential write, then random read, and so on), real-world usage paints a different picture. All four access types can occur at any time, at different transfer sizes and in different proportions. For instance, a storage subsystem on a streaming server would mostly see high sequential read I/O with large block reads and very little to no write activity. Looking at a database server without blob data types, we would probably see 75% random read, 20% random write and 5% random and sequential write. I could either guesstimate the different ratios or find a method to define a more accurate I/O usage baseline.
One way to determine an I/O baseline is to capture disk operations while the user operates the computer. I used DiskMon from Sysinternals and imported the data into an Excel spreadsheet found on the Emphase website. The end result provides %read, %write, %random and %sequential.
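For those who prefer scripting over the spreadsheet, here is a minimal sketch of the same idea. The column names (Request, Sector, Length) are assumptions about a DiskMon capture exported as CSV, so adjust them to match your actual log.

import csv

def io_baseline(path):
    """Derive %read/%write and %random/%sequential from a DiskMon-style CSV export."""
    reads = writes = sequential = random_ = 0
    expected_next_sector = None
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            op = row["Request"].strip().lower()   # assumed column: "Read" or "Write"
            sector = int(row["Sector"])           # assumed column: starting sector
            length = int(row["Length"])           # assumed column: length in sectors

            if op == "read":
                reads += 1
            else:
                writes += 1

            # An access is sequential if it starts where the previous one ended.
            if expected_next_sector is not None and sector == expected_next_sector:
                sequential += 1
            else:
                random_ += 1
            expected_next_sector = sector + length

    total = reads + writes
    if not total:
        return {}
    return {
        "%read": reads / total,
        "%write": writes / total,
        "%sequential": sequential / total,
        "%random": random_ / total,
    }

print(io_baseline("diskmon_capture.csv"))  # hypothetical export file name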
As for the usage pattern, I will rely on PCMark Vantage (PCMV), which I think is the closest to real-world computing usage. I started DiskMon, ran the roughly 45-minute benchmark, and here is the output once imported into the Excel template.
DiskMon from SysInternals & data usage gathered during the PCMV test.
From the numbers, I rated the I/O usage by activity as follows: Random Read > Random Write > Seq Read > Seq Write
Let’s factor in data compression, since the SandForce controller shrinks data to improve write I/O. The highest write performance is obtained with compressible data ("0-Fill"). The 1GB file Bench_test.fileR.tst generated for the test was zipped up using the Windows built-in tool; it went from 1GB to 0.99MB, let’s just call it 1MB. That works out to a compression ratio of 1000:1, or about 0.1% incompressible data. I cannot help but ask: how often does a process, outside of benchmarking, deal with data that compresses 1000:1?
I wanted to know the compression ratio of all 109GB worth of data stored on the SSD. The computer was rebooted from another SSD as the primary drive, turning the tested Intel 240GB into a secondary drive. I simply used the native Windows feature that compresses data to save storage space. Once the option was enabled, it took a few hours to compress the entire disk.
The process freed up ~7GB of storage space, which translates to about 94% incompressible data, or a compression ratio of roughly 1.06:1. Estimating the compression ratio is a necessary step when dealing with a SandForce controller. With the data set I am working with, though, I will just skip the “compressible” testing, since it is irrelevant.
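If you would rather not flip the NTFS compression switch on a live drive, a rough estimate can be scripted. Below is a minimal sketch that deflates a sample chunk from each file with zlib; the sampling shortcut and the target path are my own assumptions, not the method used above.

import os
import zlib

def compression_ratio(root, chunk_size=1 << 20):
    """Estimate how compressible a data set is by deflating a 1MB sample per file."""
    raw = packed = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    chunk = f.read(chunk_size)   # sample only the first chunk
            except OSError:
                continue                         # skip locked or unreadable files
            if not chunk:
                continue
            raw += len(chunk)
            packed += len(zlib.compress(chunk))
    return raw / packed if packed else 1.0

# Example: a result around 1.06 would match the ~94% incompressible figure measured above.
print(f"{compression_ratio('D:/'):.2f}:1")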
To cover queue depth, I used hIOmon during the full PCMV run. There is a one-week trial version, which is enough time to build the baseline.
From this chart, benchmark results above QD 16 do not carry the same weight as those at lower queue depths.
If a disk subsystem is seeing high QD (anything above 4, sustained for more than ~10 seconds, is considered high), it means there are more I/O requests than it can handle. One good example of that behavior is when a system is running low on memory (RAM) and starts using the disk as virtual memory. The disk is then under heavy load, having to respond not only to application I/O requests but also to virtual memory requests.
Performance:
Read performance is pretty even across the 525 Series units at QD 1. As the workload increases at higher QD, the SSDs with the most interleaving start separating themselves from the pack.
There is very little difference between the 240GB, with 8 channels and 2 interleaves, and the 180GB, operating with 6 channels and 2 interleaves. It is not surprising that the lowest performer is the 30GB, with 4 channels and 1 interleave.
Write performance follows the same principles but, by their nature, writes are more demanding. Thus, as more stress is put on the disk subsystem, the devices with the highest channel/interleave counts come out with the most favorable scores.
The Intel 525 mSATA drives do not lose much ground to the 2.5” Samsung 840 in terms of performance. I admit it is apples to oranges, but considering the size difference it was a pleasant surprise. It shows that Intel brings desktop-class performance to ultrabooks, or any mSATA-compatible device, without a speed penalty, as long as we set pricing aside. I guess that is the trade-off for mobility.
Bigger capacity does not automatically translate to higher performance outside of large-block (4MB) sequential read/write accesses, which are not the typical I/O a desktop storage system sees on a daily basis. The relevant measures (random I/O) show a 40% performance drop between the 240GB and the 30GB models. At the same time, per PCMV, the difference is only about 20% in favor of the 240GB unit.
The difference between PCMV and Anvil’s (and other synthetic benchmarking tools, for that matter) is that PCMV does not try to ramp up the workload and see how high the numbers can go. PCMV attempts to mimic desktop application or gaming behavior and comes up with a score. What my data gathering captured during the PCMV run is that most I/O activity sits under QD 4, the average transfer size is 128K, and there is more read I/O than write I/O, 65% vs. 35%.
The last chart above shows the difference between testing an SSD in a steady state with incompressible data and an out-of-box state with 100% compressible data. The discrepancies are especially accentuated with SandForce-based SSDs because of the DuraWrite technology.
Every new SSD owner should benchmark their product out of the box. As long as the results are not too far off from the vendor specifications, the PC configuration and the product are in order. Do not “over-benchmark”, though: a high volume of data written in a short period of time can cripple an SSD until garbage collection brings it back to its steady-state performance.
Boot and Shutdown times are available within Windows Event Viewer.
Right-click “My Computer” > Manage > Event Viewer > Applications and Services Logs > Microsoft > Windows > Diagnostics-Performance > Operational. The boot time comes from event ID 100, while event ID 200 refers to the shutdown time.
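If you would rather script it than click through Event Viewer, here is a minimal sketch that shells out to the built-in wevtutil tool to pull those two event IDs; the helper name and the event count are my own choices, and the log usually requires an elevated prompt to read.

import subprocess

LOG = "Microsoft-Windows-Diagnostics-Performance/Operational"

def last_events(event_id, count=3):
    """Return the most recent events with the given ID as plain text."""
    query = f"*[System[(EventID={event_id})]]"
    result = subprocess.run(
        ["wevtutil", "qe", LOG, f"/q:{query}", f"/c:{count}", "/rd:true", "/f:text"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

print("Boot events (ID 100):\n", last_events(100))
print("Shutdown events (ID 200):\n", last_events(200))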
I am aware of BootRacer; I did look into it but decided to pass on it, for two reasons.
I am questioning the value of providing these benchmarks, and here is why. Firstly, there is not enough difference to learn anything from them, since the boot process is primarily read access under QD 1. Secondly, those numbers are only pertinent to my setup, because boot time varies depending on the existing hardware and the applications loaded at start-up. Unless there are enough reader requests to keep running boot/shutdown time benchmarks, I will probably skip them in the future.
Software:
Although not included in the packaging, Intel provides two applications on its website to manage the SSDs: the Intel Solid-State Drive Toolbox and the Intel Data Migration Software.
In case the user does not feel like re-installing the operating system from scratch, the Data Migration Software makes the process painless. It is pretty straightforward: start the application, select the source, then the destination on the next screen, and confirm. The computer will reboot, migrate the drive content from one to the other, and reboot again. At that point, make a stop in the BIOS to select the new drive to boot from. I would advise keeping the previous drive around for a little while, just in case.
The other software suite is a one-stop shop to manage your Intel SSD. The first stop would be the “System Tuner”, where unwanted Windows services are turned off; there is no need to go through Windows services and disable them manually. The “Intel SSD Optimizer” forces the TRIM functionality to ease the garbage collection process, and I like the ability to schedule it as a maintenance task.
Firmware updates are handled through the Intel SSD Toolbox as well. Two diagnostic modes are available to troubleshoot the drive. I doubt they can fix anything, though; Intel support would probably ask you to run them so they can justify (or not) issuing an RMA.
Finally, the “Secure Erase” functionality wipes all user data from the drive and marks all the space as available. It is often said that this process puts the drive back into its out-of-the-box state, minus the wear. This is a temporary situation, though: as data is written to the drive, garbage collection and write amplification will bring it back to the steady state explained above.
DuraWrite technology from SandForce has yet to win me over. I can see it performing as designed when dealing with highly compressible data such as databases (ref: LSI white paper), but I fail to picture a usage pattern that would fully take advantage of DuraWrite in a desktop environment. I may be missing something, but for now I would rather stick with a fixed amount of DDR2 cache on the host. RAM cache is a proven technology, unlike DuraWrite, which relies on an unpredictable data compression ratio.
If I were in the market for an mSATA SSD, the 525 Series would definitely be on my list, simply because of the 5-year warranty and the brand; however, I would keep shopping around because of the $1.24-per-GB price tag on the 240GB. I do not want to generalize by stating “most users this” or “most users that”, but speaking for myself, I would go for reliability first and foremost, within my budget. My criteria for picking an SSD would be as follows:
Capacity Needed > Brand reputation > Price > Warranty > Relevant benchmark scores.
How about your criteria for choosing an SSD? Share your thoughts in the comments section below.