Last time we looked at the stages of the struggle for dominance in the «disk» subsystem between solid-state drives and traditional hard drives, and along the way briefly covered the nuances of SSD endurance. Today we will try to look at the practical reliability of hard drives. It may seem a bit late for that, but let's not forget that for at least the next 10-20 years (and most likely much longer — more on that later) this class of product is guaranteed to stay on the mass market, thanks to sizeable niches where the speed gains of SSDs are redundant and the stored data is relatively cold. And solid-state drives at reasonable prices will not catch up with the projected capacities of hard drives any time soon.
One can, of course, theorize about this at length. One can recall clearly unsuccessful decisions by manufacturers — say, overly aggressive head parking or particularly loud products — but the main criterion here, it seems to me, should be statistical, especially since, unlike the situation with SSDs, you are unlikely to find approved modern endurance standards for the classic «spinners».
Recall that modern hard drives mainly come in the 2.5″ and 3.5″ form factors, in both internal and external versions.
Inside mobile external drives sit ordinary 2.5-inch hard drives, which in most cases can be extracted and connected directly to a laptop or desktop with the right interfaces. And vice versa — you can put an existing disk of suitable thickness into such an enclosure, making it mobile and externally connected.
In the 2.5-inch case there are thicknesses of 12.5, 9.5, 7 and even 5 mm. Electrically they are all compatible, but the physical dimensions, as we understand, differ. It looks like this:
The contact groups are the same, but the thickness differs. 2.5-inch hard drives are mostly used in portable devices, and the thinner the laptop, the more carefully you need to check what drive thickness the manufacturer has allowed for. Putting a thin disk into a bay meant for a thick one is not a problem — they are often sold with a plastic spacer frame so they do not rattle around in bays meant for their thicker colleagues. In the absence of a frame you can shim them with anything, even cardboard in the corners. But forcing a thicker drive into a bay meant for a thin one will not work — be careful!
There were also 1.8-inch, 1.3-inch and even 1-inch Microdrives in the CompactFlash II format — miniaturization worthy of a watchmaker. But that is already history, because in the ultra-compact segment everyone was driven out by ordinary flash.
The everyday interfaces today are SATA and, still, IDE; in professional use there is also SAS. We will not dig into the questions of parallel versus serial, or the concept of buses, within this material.
IDE, aka ATA — abbreviations for Integrated Drive Electronics and Advanced Technology Attachment — has its roots in the 90s and is already becoming a thing of the past. New mainstream motherboards have not shipped with it for about 10 years, but the installed base is still full of it. It tops out at 133 megabytes per second, and the connector on the drive looks exactly like this: power on the far left, data on the right, and correspondingly on the motherboard. It is connected with a flat ribbon cable, usually gray or black. Like this.
We considered this purely for historical reference.
The mainstream today is SATA. Typical for 2.5- and 3.5-inch solutions looks like this:
On the right is the power contact group, on the left is data; the drives are shown upside down. They are compatible with each other and connected as shown in the picture.
We reviewed revisions and throughputs last time and will not dwell on them here. We only note that there are varieties such as eSATA for external devices and slimline SATA for compact internal ones. And yes — SATA is designed for hot swapping, i.e. on the fly without a reboot; at most, in the case of Windows, you may need to click the «refresh» button in Device Manager.
There are adapters for power and for connecting IDE to SATA and vice versa, but that is not our topic here.
SAS stands for Serial Attached SCSI; it is used primarily in professional environments, is backwards compatible with SATA and offers throughput of 12-24 Gbps. It looks similar to SATA, but the connector is different. Spindle speeds are high — up to 15,000 rpm — plus error correction and multipathing, everything «as in the best houses of Paris and London», but it is expensive and not something you would stick into an everyday PC. It also runs so hot that some models require kilogram-class heatsinks.
But back to the question.
Today's mainstream hard drives are at the limit of traditional technology. The data density per platter can now be raised only by fundamentally new technologies, while the thickness of the «pancake» itself can be reduced to fit more of them into an assembly. In theory, of course, the spindle speed could also be pushed to new heights, but the rest of the participants in this race would have to keep up, and at the same time not run into the air, which in a number of product lines is already being replaced by helium. Increasing the number of head blocks, remembering the past, is also no small engineering task, given all of the above. And nobody is in a hurry to bring post-SCSI in the form of SAS down into SOHO: it is expensive and, frankly, nearly obsolescent anyway. In any case, that is not the direction the industry seems to be heading.
In short, the set of problems in hard drive development comes down to a trio of mutually exclusive requirements, which is quite scientifically called a trilemma. The bottom line is roughly this: to increase recording density you need to shrink the recorded areas on the media and, accordingly, the dimensions of the heads, together with the materials everything is made of; but at the same time both the magnetic stability of such miniature areas and the ability of a tiny head to read them back legibly begin to suffer. To fix the latter you need to enlarge the former, while the overall task demands exactly the opposite. I.e., a vicious circle.
But R&D did not stand still, and its results have converged on very specific and feasible proposals for increasing hard drive capacity. Some are still in development, some have already been shown to the market. The main trend is the refinement of magnetic technologies using local heating during the recording process, plus building out the infrastructure for the system as a whole with these new inputs in mind. But among the remaining manufacturers of traditional «spinners» there is no unity of vision: the overall direction is the same, yet the ways of reaching seemingly similar goals differ technically.
A Thermomagnetic Clash of Three Yokozuna
In the near future we will most likely see the following approaches to solving the magnetic recording trilemma, and the tone will undoubtedly be set by thermomagnetic concepts. Two main ones are known today. The first is HAMR — Heat-Assisted Magnetic Recording — recording with, in the literal sense, heating! Recall that for purely physical reasons a heated area can be magnetized over a smaller spot per recorded bit and with less energy, i.e. along with achieving the desired density, the head's job gets easier and the head itself becomes easier to build in terms of materials and electromagnetic characteristics. HAMR is promoted by Seagate; the manufacturer's themed video is no masterpiece, but worth watching.
The second approach is called MAMR — Microwave-Assisted Magnetic Recording — also about heating, but done differently: via a spintronic (God forgive me) oscillator, essentially a tiny analogue of what the masses know as a microwave oven. It is backed by WD and Toshiba. Their video is much more informative and can be viewed at the link.
Both approaches, as we see, are essentially about heating, just by different means, and the second method is compatible with helium while the first is not really: strongly heating a sealed helium-filled enclosure with a laser or the like is a bit like boiling condensed milk in a closed can. Perhaps some fundamentally new laser technologies will change this in the future, but so far that is how things stand.
The next systemic element of HDD evolution will be thinner pancakes. Everything here is already extremely thin, of course, but by reducing the thickness of an individual platter, more of them can be packed into a standard case. Even +1 platter is a significant increase in total capacity, and combined with rising density it is a win all around. One of the major players in this market, Japan's Showa Denko KK, offers platters carrying about 2 terabytes each in a 3.5-inch drive. Eight pancakes per assembly were yesterday's reality; the labs already have 12-platter prototypes!
Why transparent? Because platters are built on aluminum and glass substrates.
Glass is stiffer and no less manufacturable, and Hoya — from the same Japan, better known in the optics and medical markets — is already promoting glass platters as thin as 0.38 mm! Both illustrations above are their work. Why them? They know their way around glass and optics, and for hard drive platters they are building a whole additional plant in Laos on top of the Vietnamese and Thai ones — at least according to Xinhua. By the way, almost the entire glass platter market for 2.5″ hard drives belongs to Hoya.
Helium (but not vacuum, although such daredevils exist! — the maximum operating altitudes listed in hard drive datasheets are there precisely because of air) will become mainstream, although it has been around since 2012. It is less dense than air or nitrogen, so the platter assembly spins at high speed with less resistance, and the heads move faster more easily. We will talk about helium and vacuum later.
The heads, as follows from the above and will be confirmed below, will become more innovative, smaller, and possibly more reminiscent of the Conner Peripherals «Chinook».
The modern vision of multi-headedness from Seagate looks something like this (there is even an animation):
If the picture from Seagate reflects real plans and the firmware of such drives can distribute data across platter assemblies served by physically independent head blocks, then we effectively get a pair of drives in one enclosure with RAID 0-like operating logic. As a result, speeds can increase in proportion to the number of head blocks — in this case twofold, both linear and in 4K blocks. True, 4K speeds on the order of 1-2 megabytes per second will not save anyone, but the linear figures will be quite decent for the technology and sufficient for its niches.
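Assuming the firmware really does stripe data across the actuators RAID 0-style, the scaling is trivial to model. The sketch below is my own illustration, not Seagate firmware, and the 250 MB/s per-actuator figure is a placeholder, not a spec:

```python
# Toy model: a dual-actuator drive behaves like RAID 0 across two
# independent head blocks, so a large sequential transfer is split
# between them and finishes in roughly half the time.

def transfer_time_s(size_mib: float, per_actuator_mib_s: float, actuators: int) -> float:
    """Time to move `size_mib` when striped evenly across `actuators`."""
    return size_mib / (per_actuator_mib_s * actuators)

single = transfer_time_s(1024, 250, 1)   # classic drive, one head block
dual = transfer_time_s(1024, 250, 2)     # dual-actuator, ideal striping
assert abs(single / dual - 2.0) < 1e-9   # throughput scales with actuator count
```

In reality the gain will be below the ideal 2× — the actuators share the controller, the interface and the workload mix — but the direction is clear.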
Some futurologists, however, predict hard drives built on magnetic tunneling through carbon nanotubes containing nanomagnetic inclusions. You can read more at the link. It looks something like this:
Nothing is clear, but very interesting (c). It is especially unclear how to implement it in practice.
And some write about storage devices based on holographic and even DNA technologies! But all that is in the distant future even for scientists, never mind real samples.
The question of spindle speed remains open: this part of the hard drive's mechanics dictates the requirements for the rest of the tandem and for the interfaces. 15,000 rpm has been mastered, but how much higher one can go with stable results is not yet clear. It is important to understand that at 15,000+ rpm the slightest imbalance of the assembly will finish off the motor very quickly. On top of that, because of simple physics, the data rate on the inner and outer parts of a platter at the same spindle speed differs considerably. It would also be nice to know whether a thin glass platter — or a stack of eight of them — can withstand vibration at high speed without shattering at all. And we have not even touched the actuator, which would also need to keep up. In short, this is a complex task for the whole tandem, and spindle speed is the last item on the list.
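The inner-versus-outer difference is easy to estimate. At a fixed spindle speed and fixed linear bit density, the media data rate is proportional to the linear velocity under the head, i.e. to the track radius. The numbers below (radii, 25,000 bits/mm) are my own round figures, not from any datasheet:

```python
import math

RPM = 7200
BITS_PER_MM = 25_000          # assumed linear recording density

def data_rate_mbit_s(radius_mm: float) -> float:
    """Media data rate at a given track radius, in Mbit/s."""
    velocity_mm_s = 2 * math.pi * radius_mm * RPM / 60
    return velocity_mm_s * BITS_PER_MM / 1e6

outer = data_rate_mbit_s(46)  # near the edge of a 3.5" platter
inner = data_rate_mbit_s(20)  # an innermost data track
# outer/inner == 46/20 == 2.3: the linear rate more than doubles
assert abs(outer / inner - 46 / 20) < 1e-6
```

This is exactly why benchmark graphs of sequential HDD throughput slope downward as the test moves from the outer tracks toward the spindle.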
Invention and rationalization page
Tandem… a good word. There is even a patent on this from 2004. Add diagonal blocks of independent heads, platters smeared with nanotubes, helium, shingling and a compatible heater, and you get full steampunk. True, it is scary to think what would happen to reliability.
Tiling = SMR
Now is the time to remember shingled recording — the technology has long been in circulation, but there are nuances that keep it out of the home. In English sources it is known as SMR (Shingled Magnetic Recording). The essence is roughly this: on a platter of standard physical size, pack as many tracks as possible, as densely as possible. And where do the shingles come in? The tracks are partially overlapped, like tiles on a roof. Naturally, to write thin tracks you need a correspondingly tiny head, and before that you need the technology to manufacture one with the required magnetic characteristics. Making truly microscopic heads one by one is hard — but you can write several tracks at once. Conventionally, the ratio of a traditional track on a hard drive platter to a track under shingled recording can be visualized as follows (here and below we use infographics kindly published by Microsemi):
Blue is the write head and a traditional track; green is the width of the new technology's read element. Why only one track is drawn — see below: that is precisely the key flaw of the idea.
This is what it looks like when written. Conditionally, because in practice something close to black magic begins there for the average consumer, while we enjoy the complacent feeling that we, like, know how it works. Hello, Mr. Clarke.
In general, the head plows — i.e. magnetizes — several tracks at once. Everything seems clear enough, but there is still a problem. Due to the physical features of the technology, this beauty is convenient only for sequential recording, because overwriting tracks selectively and individually is, suddenly, impossible. More precisely, it is possible, but random writes entail a serious complication of the procedure, which we will examine more closely. I.e., possible in principle, impossible in practice.
So, the head of an SMR disk, having written several tracks at once, will erase them the same way — collectively, because the write head is also the erase head. A combine harvester with a wide header, so to speak.
This is what it looks like; the version from what was then Hitachi is below.
I.e., to record the orange fragment, the tracks across the full width of the write head must be physically rewritten, no way around it. To do the job, you need to read the fragment, decompose it somewhere at the level of a DRAM buffer into the needed and unneeded parts, attach the new piece of data to the needed part, gather it all together and send it through the head to be written in one go. It will be fine if the new fragment is, on the whole, smaller than the erased one. If not, you will have to append fragments elsewhere (which causes problems discussed below) or, ideally, to a spot after the physical end of the data on the disk. Purely in theory the controller could hunt for free slots, but in reality that would paralyze the system. Of course, international grandmasters of combinatorics will not see the problem here, and from a mathematical point of view there is none — logically it is all easy. But turning the idea at the concrete electromechanical level takes physical time and compute resources, plus possible costs for error correction. I.e., on random writes such a «spinner» will be even worse than a regular HDD. Seagate tested the Archive 8TB SATA3 drive on Debian; the result of random writing looks something like this:
A chilling dip to a fearsome 3 IOPS! (exactly three: 1+1+1). We see the IOPS after the buffers are exhausted under a random write load with a queue depth of 1 — admittedly, more than a minute in, which somewhat reduces the drama, but the upper peaks are objectively nothing to write home about either.
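Why random writes are so painful can be sketched in a few lines. This is my own toy model of a shingled band, not real firmware: modifying one «track» forces a read-modify-write of everything from that track to the end of the band, because the wide write head destroys the overlapped neighbours downstream.

```python
class ShingledBand:
    """Toy model of one SMR band: tracks overlap, so a write clobbers the tail."""

    def __init__(self, tracks: int):
        self.data = [b""] * tracks
        self.ops = 0                      # track reads + writes performed

    def write_track(self, idx: int, payload: bytes) -> None:
        # Save everything the wide head is about to clobber...
        tail = self.data[idx + 1:]
        self.ops += len(tail)             # reads of the downstream tracks
        self.data[idx] = payload
        self.ops += 1                     # the write we actually wanted
        # ...and put the tail back behind it.
        for i, t in enumerate(tail, start=idx + 1):
            self.data[i] = t
            self.ops += 1                 # forced rewrites

band = ShingledBand(tracks=8)
band.write_track(0, b"new")               # one logical write...
assert band.ops == 1 + 7 + 7              # ...costs 15 track operations
```

One logical track update costs 15 track-level operations here; on a real drive the bands are far larger, which is exactly where those 3 IOPS come from.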
If you simply erase random data (read, drop the unneeded, write back the needed remainder), you get bald patches, which for future normal writing must be processed and compacted by a background process similar to defragmentation. This is very much like TRIM on an SSD — in both cases the point is to prepare a field for direct writing at full width without extra gestures in the process — but due to the mechanical nature of a hard drive this cannot be done quickly, and the total load grows substantially: a kind of analogue of write amplification. Ideally everything should be compacted so that new data is written after the physical end of the existing data, but that involves physically shuffling large arrays of data, with all the consequences. The state of a disk when a new write goes to genuinely clean space, or to space cleaned by compaction and garbage collection, is sometimes called FOB — fresh out of box — and these are, in effect, ideal conditions for this kind of recording. The analogy with SSDs holds here too.
The picture needed some finishing with a file, which is why such disks gained conventional areas for transit-buffer purposes, working on the one-track-per-head-width principle, i.e. the conventional technology of traditional drives. Logically this is an analogue of SLC caching in TLC and QLC solid-state drives, except that in our case service information about what was deleted and where can also be stored there. For an even more effective fix, DRAM buffering was brought in as well. Mathematics was added to the firmware and things became more or less tolerable — i.e. as long as the buffer exceeds the typical average task, the system does not particularly feel the brakes and the disk does not become a bottleneck. This is exactly what the illustration above shows. That drive could carry as much as 256 megabytes of buffer, but unfortunately the specific tested modification was not indicated. The general characteristics of the «piece of iron» are at the link, and it appears the manufacturer tested the maximum configuration.
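The caching logic described above can be sketched as follows. This is entirely my own toy model — real firmware is far more involved — but it captures the behavior: writes feel fast until the staging area fills, then every write pays the slow shingled path.

```python
class MediaCachedDrive:
    """Toy model of an SMR drive with a conventionally-recorded staging area."""

    def __init__(self, cache_slots: int):
        self.cache = []            # the CMR "media cache" staging area
        self.cache_slots = cache_slots
        self.slow_writes = 0       # writes forced straight into shingled bands

    def write(self, block) -> str:
        if len(self.cache) < self.cache_slots:
            self.cache.append(block)
            return "fast"          # absorbed by the staging area
        self.slow_writes += 1
        return "slow"              # cache full: pay the read-modify-write cost

    def flush(self) -> int:
        """Bulk, sequential move of staged data into shingled bands."""
        moved = len(self.cache)
        self.cache.clear()
        return moved

d = MediaCachedDrive(cache_slots=4)
results = [d.write(i) for i in range(6)]
assert results == ["fast"] * 4 + ["slow"] * 2
assert d.flush() == 4
assert d.write(99) == "fast"       # staging area usable again after the flush
```

The benchmark cliff in the illustration above is exactly this transition from «fast» to «slow» once the buffers run dry.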
Naturally, there are other tricks for the described problems — logical zoning, tape-like organization and so on, up to firmware modified for specific tasks! But because of their main problem — the physical root cause — such approaches only smooth the corners.
All of the above clearly hints that, despite the grown-up capacities of SMR drives, the technology makes them niche products for specific kinds of loads — but in those niches they hit the target precisely. For example, linear multi-threaded reading and writing with no, or a minimum of, random operations. A good fit is a data center focused on reading not-very-hot data. By the way, if the database of some social network sits on an array of such disks, guess three times whether anyone will actually delete, say, random photos from the array when a user clicks «delete» in their profile. Or will such photos simply stop being shown to users while physically remaining available to the administration, given the performance hit from the associated disk shuffling? It is easier to bring in another disk and a half than to slow the array down with random operations followed by data compaction. To put it more down to earth: such a data center is almost a write-once center. This is partly why nothing can ever be completely removed from the Internet — in some cases it is genuinely inconvenient to do, and given current prices for ever-growing drive capacities and the absence of floods and fires at HDD plants, it is simply not economically worthwhile. Another good niche is stream archiving: surveillance cameras, audiovisual broadcasts, archiving of critical data that does not need to be overwritten often or at random.
A minute of conspiracy theory
If we fantasize a bit: it is convenient to record a month of calls of all subscribers of a hypothetical mobile operator onto such an array, then, in the transit area, using technologies that have long been not only unclassified but also road-tested by Google on YouTube, convert it all into txt for convenient search and keyword analysis, and neatly append it to the free part of the array. The sources can then be safely wiped in full, ensuring FOB-grade recording of the next month. Or not wiped! Then the motherland will not only hear and know, but also remember very well. End of report, comrade colonel — i.e. this is all fiction, of course, and any resemblance to actually existing technology is purely coincidental.
And why would an artiodactyl need an accordion?
As a result, shingled drives should be used «adjusted for wind strength and barrel temperature». Depending on the situation this is handled by fully hardware crutches of the HBA type, which implement the specific I/O logic for such drives by executing special command sets. Such drives can and should also be assembled into RAID — with an understanding of the specifics — but that is not the topic of this material; the main thing is that you now know a little more overall. Those who wish can dig deeper under the keywords DM (drive-managed), HA (host-aware) and HM (host-managed) SMR, though a SOHO user is unlikely to run into this.
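For the curious, the rule a host-managed drive enforces can be modeled in a few lines. This is a toy sketch of the zone/write-pointer idea from the ZBC/ZAC world, my own simplification rather than an implementation of the actual command set:

```python
class Zone:
    """Toy host-managed SMR zone: writes must land at the write pointer."""

    def __init__(self, size_blocks: int):
        self.size = size_blocks
        self.write_pointer = 0

    def write(self, lba: int, nblocks: int) -> bool:
        if lba != self.write_pointer or lba + nblocks > self.size:
            return False                  # non-sequential write: rejected
        self.write_pointer += nblocks
        return True

    def reset(self) -> None:
        """Analogue of resetting the zone's write pointer (erase the zone)."""
        self.write_pointer = 0

z = Zone(size_blocks=256)
assert z.write(0, 64)        # sequential from the start: accepted
assert not z.write(16, 8)    # rewind into the middle of the zone: rejected
assert z.write(64, 64)       # continues exactly at the pointer: accepted
z.reset()
assert z.write(0, 8)         # after a reset the zone is writable again
```

Drive-managed disks hide this machinery behind firmware (hence the media-cache tricks above); host-managed ones expose it and push the bookkeeping onto the host, which is precisely what those HBAs and special command sets are for.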
TDMR as a forerunner of the thermomagnetic future
Separately, we should mention TDMR — Two-Dimensional Magnetic Recording — which not only exists but is already on sale in the form of 14 TB Seagate products. Here the trilemma is attacked head-on, by shrinking the track width and the size of the write head; the unattainable ideal is 1 bit per magnetic grain. It looks something like this, and there are surprisingly few explanations of it on the Web.
True, reads come out noisy, which in turn is solved by a head with several read elements: readback from neighboring tracks becomes more reliable and the signal overall more legible. The write head remains a single one. The net effect of the technology on recording density is modest — about 10 percent — but that is not the point. Multi-reader scanning of a platter in a single pass will clearly become mainstream, simply out of the need to retrieve densely recorded data reliably. A good start, but the complexity of the mutual arrangement of elements, the precision of their manufacture and positioning in operation all grow, and the figures must stay stable over time.
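The intuition behind multiple read elements can be shown numerically: averaging independent noisy readings of the same signal lowers the noise variance. This is a generic statistics sketch, not Seagate's actual signal processing, and all the numbers in it are arbitrary:

```python
import random

random.seed(42)
SIGNAL = 1.0                  # the "true" magnetization we want to recover

def read_once(noise: float = 0.5) -> float:
    """One noisy reading of the signal."""
    return SIGNAL + random.gauss(0, noise)

def read_averaged(readers: int) -> float:
    """Combine several independent readers of the same track."""
    return sum(read_once() for _ in range(readers)) / readers

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

singles = [read_once() for _ in range(2000)]
triples = [read_averaged(3) for _ in range(2000)]

# Three independent readers cut the noise variance roughly threefold.
assert variance(triples) < variance(singles)
```

In a real head the readers are not fully independent and the electronics do far more than a plain average, but the direction of the benefit is the same.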
In any case, it was precisely this stage that was critically needed before introducing the thermal innovations described above, because the latter will suffer from exactly the readback problems that TDMR preemptively solves — or at least has begun to solve in practice.
A rather old, but one of the most complete treatments of the theory of TDMR is at the link.
But the speeds of future hard drives will, perhaps, be a secondary issue — we will talk about that later. The primary ones will be capacity and… preserving reliability.