“What exactly is SSD cache, and how does it work?” is a question we are often asked.
The objective of enabling SSD cache is to improve the performance of random access to the small portion of data that is frequently accessed in the storage space. For example, large sequential read/write operations (e.g., HD video streaming) and entirely random access patterns both lack re-read behavior, and thus will not benefit significantly from SSD caching. For general applications, it is recommended to enable the Skip sequential I/O option, so that sequential I/O passes straight through to the drives of the storage space.
SSD cache improves random-access performance by storing frequently accessed data on SSDs. It can be mounted on a volume or an iSCSI LUN (block-level). There are two types of SSD cache:
- Read-only cache can consist of 1 to 12 SSDs, mounted in a Basic (single-drive) or RAID 0 configuration, to improve the random read performance of the storage space on which it is mounted.
Note: SSD read-only cache stores copies of data from the volume; thus, no data loss will occur even if the read-only cache fails.
- Read-write cache can be mounted in a RAID 1 / RAID 5 / RAID 6 configuration, depending on the number of SSDs (up to 12 SSDs), to improve the random read and write performance of the storage space on which it is mounted.
Both types of SSD cache implement an LRU (Least Recently Used) algorithm to swap the data in the cache.
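The LRU policy mentioned above can be sketched in a few lines of Python using an ordered dictionary; this is an illustrative model of the eviction rule, not Synology's actual implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache sketch: each access moves an entry to the
    'recent' end; when capacity is exceeded, the least recently
    used entry (the oldest end) is evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # keys ordered oldest -> newest use

    def get(self, key):
        if key not in self.store:
            return None              # cache miss: caller reads from HDD
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")            # "a" becomes most recently used
cache.put("c", 3)         # evicts "b", the least recently used
print(list(cache.store))  # → ['a', 'c']
```

Note that every `get` also reorders the bookkeeping structure, which is why LRU tracking costs CPU cycles, a trade-off discussed later in this article.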
The Need for SSD Cache
High information-system productivity demands low latency, and the requirements for I/O latency are most stringent when running mission-critical business applications. For any IT deployment, the greatest challenge is balancing low latency, high efficiency, and an optimized system-utilization rate.
The degree of I/O latency in a storage system is determined by two factors: the I/O workload pattern and the capabilities of the storage media. Most business applications (e.g., OLTP databases or email services) involve random IOPS, accessing data stored non-contiguously on the system's disks. As the required bits of data are not physically near one another, the disks perform numerous seek operations, thereby increasing I/O latency.
Traditionally, to overcome the high I/O latency caused by random IOPS workloads, a larger-than-necessary number of disks may be deployed to increase the number of drive heads and reduce the chance of two consecutive reads landing on the same disk, thereby boosting access performance. However, over-deployment has several drawbacks, including lower efficiency and reduced overall system utilization. More specifically, increasing the number of disks also increases the number of enclosures, the space required, and the power consumed for operation and cooling, ultimately leading to higher maintenance costs. Moreover, the system-utilization rate may diminish as unnecessary capacity is added just to reach the requisite number of heads.
SSD Cache on Synology NAS
By leveraging the superior random-access performance of solid-state drives (SSDs), Synology SSD Cache technology addresses these enterprise challenges, boosting read and write speeds without adding to the overall number of disks.
In terms of read operations, statistically only a small portion of the data in any given storage system is accessed frequently. System performance can therefore be improved by storing frequently accessed data in the SSD cache to create a read buffer, while keeping total cost of ownership to a minimum.
Small, random write operations are also common in enterprise workloads. With its excellent random-access performance, SSD read-write cache can accelerate volumes and iSCSI LUNs, reduce the latency of random write operations, and greatly reduce any impact on other data transfers.
Synology’s read-write and read-only SSD Cache technology is available on all XS+, XS, and selected Plus series products. By attaching each SSD cache to a single storage volume or iSCSI LUN (block-level), Synology SSD Cache can create a read and write buffer which greatly enhances the system performance of your Synology NAS.
Data Operations and Hot Data
Typically, on receiving a read request, a server first checks whether the relevant data is in the cache held in system memory (RAM), which stores the most recently accessed information. If the requested data is absent, reads from disk are triggered. As RAM is severely limited in size compared to the working data set, most retrieval requests necessarily result in reading from disk and therefore in increased latency.
In most applications, there are observable patterns in data retrieval and workload due to the I/O characteristics of the application’s behavior. For instance, in an OLTP database workload, some tables in the database are more frequently read than others. The most frequently accessed data is termed “hot data.” Of this “hot data” subset, the most recent data has an even higher probability of being frequently accessed. In the majority of critical, business workloads, the most recently accessed data is also the most relevant and therefore in need of timely retrieval.
A NAS read cache can be created with only one SSD. If a larger read cache or a read-write cache is required, two SSDs of the same model must be installed in the server. For a read-only cache, the SSDs are configured in RAID 0. For a read-write cache, the SSDs are configured in RAID 1 to ensure data integrity if one SSD fails. Once installed, each SSD cache can be attached to any one volume or iSCSI LUN (block-level) in the system. If a volume has SSD cache enabled, any iSCSI LUN (file-level) on that volume will also benefit from the increased performance.
SSD Cache on QNAP NAS
QNAP solid-state drive (SSD) cache technology is based on a disk I/O read cache. When applications on the Turbo NAS access the hard drive(s), the data is also copied to the SSD cache. When the same data is accessed again, it is read from the SSD cache instead of the hard drive(s), so commonly accessed data accumulates in the cache. The hard drive(s) are only accessed when the data cannot be found in the SSD cache.
Traditional data access method. When the CPU needs to process data, it follows these steps:
1. Check the CPU cache.
2. If not found in the CPU cache, check RAM.
3. If not found in RAM, read from the hard drives and copy the data to RAM.
SSD cache data access method. When the CPU needs to process data, it follows these steps:
1. Check the CPU cache.
2. If not found in the CPU cache, check RAM.
3. If not found in RAM, check the SSD cache.
4. If not found in the SSD cache, read from the hard drives and copy the data to the SSD cache.
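The lookup chain above can be sketched as a walk through successive tiers, falling back to the slowest tier only on a full miss. This is a conceptual model (the tier names and dictionary representation are illustrative), but it captures how the SSD cache is populated on a miss:

```python
def read_block(block_id, cpu_cache, ram, ssd_cache, hdd):
    """Walk the cache hierarchy fastest-first; on a full miss, the
    block is read from the HDD and copied into the SSD cache so
    the next read of the same block is served from flash."""
    for tier in (cpu_cache, ram, ssd_cache):
        if block_id in tier:
            return tier[block_id]     # served from a faster tier
    data = hdd[block_id]              # slowest path: mechanical disk
    ssd_cache[block_id] = data        # populate the SSD cache
    return data

hdd = {"blk7": b"payload"}
ssd = {}
read_block("blk7", {}, {}, ssd, hdd)  # first read falls through to the HDD
print("blk7" in ssd)                  # → True: now cached on SSD
```

On the second call with the same block ID, the loop finds the entry in `ssd` and never touches `hdd`, which is exactly the behavior the step list describes.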
Since an SSD supports high-speed data transfer and has no moving mechanical parts, an SSD cache can significantly improve access speeds for applications that issue many random read requests.
QNAP SSD cache technology provides two algorithms:
(1) LRU (default): higher hit rate, but requires more CPU resources. When the cache is full, LRU discards the least recently used items first. Because the system must track accesses to cached data to ensure it always discards the least recently used entry, this algorithm requires more CPU resources but provides a higher hit rate.
(2) FIFO: requires fewer CPU resources, but has a lower hit rate. When the cache is full, FIFO discards the oldest data in the cache, regardless of how recently it was used. This reduces the hit rate but does not require much CPU.
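A tiny simulation makes the trade-off concrete. The access trace below (a hypothetical workload where one hot block is re-read between colder blocks) shows LRU keeping the hot block resident while FIFO eventually evicts it despite its popularity; FIFO's bookkeeping is cheaper because hits require no reordering:

```python
from collections import OrderedDict, deque

def simulate(policy, capacity, accesses):
    """Count cache hits for an access trace under LRU or FIFO eviction."""
    hits = 0
    if policy == "lru":
        cache = OrderedDict()
        for key in accesses:
            if key in cache:
                hits += 1
                cache.move_to_end(key)  # tracking on every hit costs CPU
            else:
                cache[key] = True
                if len(cache) > capacity:
                    cache.popitem(last=False)  # evict least recently used
    else:  # FIFO: evict purely by insertion order, no work on hits
        cache, order = set(), deque()
        for key in accesses:
            if key in cache:
                hits += 1
            else:
                cache.add(key)
                order.append(key)
                if len(cache) > capacity:
                    cache.discard(order.popleft())  # evict the oldest
    return hits

# Hot block "A" re-read between one-off cold blocks.
trace = ["A", "B", "C", "A", "D", "A", "E", "A", "F", "A"]
print(simulate("lru", 3, trace), simulate("fifo", 3, trace))  # → 4 3
```

On this trace LRU scores 4 hits to FIFO's 3, matching the hit-rate ranking described above; on a purely sequential, never-re-read trace the two policies would perform identically.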
Applications and benefits
- Database: MySQL, MS SQL Server, etc.
- Virtual machine: VMware, Hyper-V, XenServer, etc.
SSDs offer high transfer speeds, but their cost per unit of capacity is higher than that of hard disk drives (HDDs). With SSD cache technology, we can significantly improve IOPS and I/O speed while maintaining a relatively low unit cost, satisfying the demand for both usable space and transfer speed.
QNAP designed Qtier (QNAP auto-tiering) Technology
One of the most compelling reasons to combine SSD cache, SSDs, and other drives is to reach an optimal level of efficiency through data tiering. QNAP has designed a feature called Qtier for this purpose. Qtier technology drives auto-tiering by constantly optimizing data placement across storage tiers.
As the price of solid-state drives (SSD) becomes more favorable, the demand for high-performance storage increases. With the advantages of high IOPS and low response times, SSDs improve the performance of data center applications that require fast and consistent performance. However, as the cost per gigabyte of an SSD is still higher than conventional hard drives (HDD), it is still more economical to use HDDs to store large amounts of cold (rarely-accessed) data.
QNAP's Qtier (QNAP auto-tiering) technology incorporates the speed of SSDs and the capacity of HDDs in one NAS system. Qtier automatically migrates data based on how frequently it is accessed, using an industry-leading 12Gb/s SAS controller. It improves overall system performance under the most complex, mixed workloads and server applications while providing high-capacity storage for cold data.
Intelligent Data Management
Qtier automatically moves the most active data to high-performance drives and migrates less active data to high-capacity drives. This alleviates the burden on administrators by handling performance estimation and data relocation for them.
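The core idea of frequency-based tiering can be sketched as a simple classification pass. QNAP does not publish Qtier's internal algorithm, so the threshold rule and names below are purely illustrative assumptions:

```python
def retier(blocks, access_counts, hot_threshold=100):
    """Illustrative tiering pass: blocks accessed at least
    `hot_threshold` times go to the SSD tier, the rest to the
    HDD tier. (Threshold value is a made-up assumption.)"""
    placement = {}
    for block in blocks:
        hot = access_counts.get(block, 0) >= hot_threshold
        placement[block] = "ssd" if hot else "hdd"
    return placement

# Hypothetical access statistics gathered over some monitoring window.
counts = {"db_index": 5000, "old_backup": 2}
print(retier(["db_index", "old_backup"], counts))
# → {'db_index': 'ssd', 'old_backup': 'hdd'}
```

A real tiering engine would run such a pass periodically and physically migrate only the blocks whose placement changed, which is the "constant optimization" the feature description refers to.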
Enable Auto-tiering when creating a Storage Pool
As mentioned previously, to achieve the required level of performance and cost benefits, data is categorized by access frequency. Service levels such as response time or runtime must be measured and evaluated in advance to decide which data should be stored in which tier at a given time.
Selecting the best way to implement a NAS with the required storage deserves some thought, as NAS servers are now embracing enterprise-level features at a fraction of the cost. And with artificial intelligence (AI) just around the corner, you can be sure it will form an integral part of future NAS designs.