How to solve the problem of SSD write amplification

2019-11-04 14:29 benfenge

The emergence of the SSD is a milestone for the storage industry, especially now that software-defined storage is on the rise: if your storage system doesn't have a few SSDs in it, you'd almost be embarrassed to say hello to your friends.


According to Gartner, the all-flash array market will keep expanding at a compound annual growth rate of 37%, and by 2020 about 25% of data centers will be using all-flash arrays for primary data storage. All-flash is clearly winning over more and more users.


However, nothing is pure gold and no one is perfect. For all its strengths, the SSD has a congenital shortcoming rooted in its own operating mechanism: write amplification. In 2008, Intel and SiliconSystems (acquired by Western Digital in 2009) first coined the term "Write Amplification" (WA) and used it in public papers.


To understand write amplification, we must start with the SSD write mechanism.

An SSD's design is completely different from a mechanical disk's; you could even think of an SSD as a small PC with its own full set of internal organs. First, because there is no read/write head, reading and writing data skips the head's seek between tracks, which is why an SSD can deliver much higher IOPS. A quick aside here: many people assume that with no mechanical head moving around, power consumption must drop, but that is not absolute. Some NVMe SSDs, for example, draw no less power than a mechanical hard drive.

However, once an SSD has been fully written, old data must be erased before new data can be written. (Note: the erase unit is generally larger than the minimum write unit. A typical write unit, the page, is 4KB, while a typical erase unit, the block, is 512KB or more.) This adds an extra step, so overall write speed drops. And just as a sheet of paper gets thinner the more often you rub it with an eraser, write amplification increases the total amount of data the SSD writes, which naturally shortens its service life.
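To make the page-versus-block asymmetry concrete, here is a minimal Python sketch (not a real flash translation layer; the 4KB page / 512KB block geometry is just the example figure from above): it shows why a programmed page cannot simply be overwritten in place, which is where all the trouble starts.

```python
# A minimal sketch, assuming a simplified flash model:
# 4 KB pages (smallest programmable unit), 128 pages per block (512 KB,
# the smallest erasable unit).  Not any vendor's actual firmware.

PAGE_SIZE = 4 * 1024
PAGES_PER_BLOCK = 128


class FlashBlock:
    def __init__(self):
        # None = erased (writable); bytes = programmed page content
        self.pages = [None] * PAGES_PER_BLOCK

    def program(self, page_index, data):
        if self.pages[page_index] is not None:
            # NAND cannot overwrite a programmed page in place:
            # the whole 512 KB block must be erased before reprogramming.
            raise RuntimeError("page already programmed; erase the whole block first")
        self.pages[page_index] = data

    def erase(self):
        # Erase only works at block granularity, which is what makes
        # small in-place updates so expensive.
        self.pages = [None] * PAGES_PER_BLOCK


block = FlashBlock()
block.program(0, b"x" * PAGE_SIZE)      # first write succeeds
try:
    block.program(0, b"y" * PAGE_SIZE)  # in-place overwrite is rejected
except RuntimeError as e:
    print(e)
block.erase()                            # 512 KB erased just to rewrite 4 KB
block.program(0, b"y" * PAGE_SIZE)
```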


So what exactly is write amplification?


Let's take an example. Suppose you want to write 4KB of data, but no block has a clean page left; there is, however, invalid data that could be erased. The controller therefore moves all of the block's valid data into cache or OP (over-provisioning) space, erases the block, and then writes everything back together with the new 4KB. This is write amplification: the host asked to write 4KB, but the drive ended up writing an entire block (say 512KB), a 128x amplification. (Of course, a real-world SSD controller is not this "dumb", but the write amplification is real.)


At the same time, an operation that originally needed only a 4KB write becomes a read (512KB), an erase (512KB), and a rewrite (512KB), which greatly increases latency and naturally slows down write speed. When this kind of erasing happens too often, it also shortens the SSD's service life.
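Putting the example above into a formula: the write amplification factor is simply the data actually written to the NAND divided by the data the host asked to write. A quick sketch of that arithmetic:

```python
# Write amplification factor = bytes written to NAND / bytes written by the host.
# Worst-case example from above: the host writes 4 KB, but the controller
# ends up rewriting an entire 512 KB block.

def write_amplification(nand_bytes_written, host_bytes_written):
    return nand_bytes_written / host_bytes_written

host_write = 4 * 1024        # 4 KB update requested by the host
nand_write = 512 * 1024      # whole 512 KB block rewritten internally

print(write_amplification(nand_write, host_write))  # -> 128.0
```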


Therefore, one effective method is to adjust the OP reserved space and clean up useless data on the SSD in time, leaving more blank space and avoiding redundant erase-and-write cycles. This lowers the drive's write amplification factor and extends its life, but the biggest problem with this approach is that it wastes SSD capacity.
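For reference, over-provisioning is usually quoted as the ratio of hidden spare capacity to user-visible capacity. A small sketch of that trade-off (the capacities below are hypothetical examples, not recommendations):

```python
# More OP -> more spare blocks for garbage collection -> lower write amplification,
# but less usable space.  Capacities are hypothetical.

def op_ratio(physical_gb, user_gb):
    return (physical_gb - user_gb) / user_gb

print(f"{op_ratio(512, 480):.1%}")   # ~6.7%: a modest reserve
print(f"{op_ratio(512, 400):.1%}")   # 28.0%: a heavier reserve, more capacity "wasted"
```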


In addition, as we said earlier, an SSD can be regarded as a mini PC. When an SSD is designed, write amplification is one of the main issues its controller chip has to address.


Over the course of its development, the SSD has accumulated a number of mechanisms specifically aimed at the write amplification problem, such as aligned writes, append writes, the Trim command, garbage collection, and wear leveling, all of which help extend its service life.
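As an illustration of the garbage collection part, here is a minimal sketch of a greedy victim-selection policy (an assumed, simplified model, not any particular vendor's firmware): the block with the most invalid pages is reclaimed first, because relocating fewer valid pages means less extra writing.

```python
# Greedy GC sketch: reclaim the block with the most invalid (stale) pages.
# The valid pages it still holds must be copied elsewhere before erasing,
# and that copying is itself a source of write amplification.

def pick_victim(blocks):
    """blocks: list of dicts like {'valid': n_valid_pages, 'invalid': n_invalid_pages}"""
    return max(range(len(blocks)), key=lambda i: blocks[i]["invalid"])

def gc_cost(block, page_size=4 * 1024):
    # Extra NAND bytes written by relocating the victim's valid pages.
    return block["valid"] * page_size

blocks = [
    {"valid": 100, "invalid": 28},
    {"valid": 10,  "invalid": 118},   # mostly stale -> cheap to reclaim
    {"valid": 64,  "invalid": 64},
]
victim = pick_victim(blocks)
print(victim, gc_cost(blocks[victim]))   # block 1, 40960 extra bytes copied
```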


In today's trend toward software-defined everything, the idea is to use software on the host side to control access behavior so that it better matches the characteristics of the SSD, thereby working around the "irrationality" of the SSD's own hardware. Many veterans like to call this the Host Based approach, while the traditional way of treating the SSD just like a mechanical hard drive is the Device Based approach.


Common random writes hit many non-contiguous LBAs (Logical Block Addresses), which greatly increases write amplification.


Therefore, when an SSD is used as a cache, the software-defined approach turns the random I/O sent to the SSD into sequential I/O, so that data written to the SSD is appended rather than overwritten in place. This reduces how much data on the SSD gets overwritten, while the Trim command and an intelligent garbage collection mechanism ensure that space freed by the user is reclaimed promptly and efficiently.
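Here is a minimal sketch of that append-only idea (an assumed design for illustration, not any specific product's implementation): random logical writes are laid down sequentially, and a small mapping table tracks where each logical block currently lives, while superseded copies become garbage for Trim/GC to reclaim later.

```python
# Host-based append-only write sketch: random LBAs become sequential physical
# writes, and overwrites just append a new copy and mark the old one stale.

class AppendOnlyCache:
    def __init__(self):
        self.write_pointer = 0   # next sequential physical offset (in 4 KB units)
        self.mapping = {}        # logical block address -> physical offset
        self.stale = set()       # physical offsets whose data was superseded

    def write(self, lba, data):
        old = self.mapping.get(lba)
        if old is not None:
            self.stale.add(old)              # old copy becomes garbage for Trim/GC
        physical = self.write_pointer
        self.mapping[lba] = physical
        self.write_pointer += 1              # always write at the tail: sequential I/O
        return physical

cache = AppendOnlyCache()
for lba in (7, 3, 7, 42, 3):                 # "random" logical addresses
    print(lba, "->", cache.write(lba, b"..."))
# Physical offsets come out as 0,1,2,3,4 even though the LBAs are scattered.
```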


In testing we often run into situations where, with identical hardware, different software performs very differently. The main reason is write amplification: writing more is no substitute for writing smartly! Good storage software therefore needs good algorithms, and a good algorithm must reduce write amplification as much as possible. There are other techniques for reducing write amplification as well, such as separating static (cold) and dynamic (hot) data; space is limited, so I won't go into them here.


In the foreseeable future, whoever masters write amplification will master the SSD, and whoever masters the SSD will win the world.