Forum Posting: on use of Solid State Drives and How Memory is Used in your Computer

amirm

Apr 2, 2010
Seattle, WA
From time to time, I write posts elsewhere that I think might have value in our forum. I will copy them verbatim. If there are questions about the context and follow ups, please ask.

This one started by someone asking about using Solid State Disks (SSDs) instead of traditional hard disks and how big it needed to be.

This was my initial answer:

A few things to do if you get an SSD. You want to minimize writing to SSDs as that slows them down and reduces their life:

1. Turn off virtual memory (paging) in the OS. Get plenty of memory so that you don't need this.

2. Move the Windows temp and browser download directories to a secondary drive, assuming it is always connected to your PC.

3. Check to make sure your SSD supports the TRIM command. If so, combining it with Windows 7 reduces the impact of system writes.
 

amirm
This was followed by a question regarding SSDs slowing down with writes and my answer to it:

On 1293125485, Ernie Bornn-Gilman said...
Here's a surprise:

I've never heard of the number of read or write activities being any kind of issue with spinning-platter hard drives, but it's an issue with something that doesn't move? That is really a surprise!

Do you happen to know why this is the case, or is it no worse than with hard drives, and we just never talked about it before?

No, the problem is much worse than with hard disks.

Some introduction to NAND flash memory is in order to better understand the problem.

NAND flash has small pages, on the order of 2K to 8K, with larger sizes becoming more common as devices get denser and denser. To read the device, you access one full page at a time. This is actually fine, since these page sizes are pretty close to, or match exactly, the file system block size, and at any rate match how we access hard disks.

NAND then has a notion of a "block." A block is N pages; today, N = 64 to 128, which is the capacity of an entire row in flash memory. Assuming a 4K page and 128 pages/block, our block size = 512 KBytes.
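To check that arithmetic (the figures here are illustrative; real devices vary in page and block size):

```python
# NAND geometry arithmetic (illustrative figures; real devices vary)
PAGE_SIZE_BYTES = 4 * 1024      # 4K page
PAGES_PER_BLOCK = 128           # pages per block (one flash row)

block_size_bytes = PAGE_SIZE_BYTES * PAGES_PER_BLOCK
print(block_size_bytes // 1024, "KB per block")  # 512 KB per block
```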

Now, this is where the technology deviates heavily from hard disk. In hard disk, if I want to write 4K, I tell the hard disk controller and it will do that for me. Not so with NAND flash. What you must do, is first erase a *block* and then write your 4K data. Yes, I said a block. You cannot just erase a 4K page and write new data to it. BTW, if you do not erase the block first, you can't correctly write to it as write operations can only push a bit to 0 but not to a 1.

So far, we can see that writes are slower than reads as the latter is a single command whereas writing requires a block erase, and then followed by the write operation.

We now get into a nastier problem called "write amplification." Imagine if we have a block that has 64 pages but we have only written 6 of them in a previous write operation. To write 2 more pages now requires us to perform the following functions:

1. Read the entire block into some buffer memory.
2. Erase the block.
3. Write back the original 6 pages.
4. Write the new 2 pages.

So instead of just having 2 write operations, we have 8, plus an erase cycle (4X amplification)! If we did not perform the above steps and just erased the block, we would lose the previously stored data in there. So you see that the act of writing to a block makes subsequent write operations to it slower.
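The amplification math above can be sketched in a few lines (a toy model of the read-erase-rewrite cycle, with the same illustrative numbers):

```python
# Write amplification for a partially filled block
pages_already_written = 6   # valid pages that must be preserved
new_pages = 2               # pages the application actually wants to write

# Without the read-erase-rewrite dance, we would issue only `new_pages` writes.
# With it, every valid page in the block must be rewritten as well.
total_page_writes = pages_already_written + new_pages
amplification = total_page_writes / new_pages
print(total_page_writes, "page writes,", amplification, "x amplification")
# 8 page writes, 4.0 x amplification
```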

It gets sillier than this. If I delete the data in the original 6 pages, the SSD controller still follows the above scheme when I tell it to write the two new pages, even though there is no useful data there to preserve! Why? Because when you delete something in the operating system, nothing is ever communicated to the storage device. The operating system simply remembers that the space is free to reuse when needed and doesn't care that stale data still exists on the drive (this is why it is possible to undelete a file).

This gets us to the TRIM command. This is a way for the operating system to tell the controller that it has deleted something in the block. That allows the controller to read the block, erase it, and then write back what is still good in it. This way, when the next write command comes for the empty space, we can just write it without having to erase or preserve anything.
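A toy model of why TRIM helps (the function and numbers are my own illustration, not how any real controller is implemented): the controller preserves whatever pages it still believes are valid, so telling it about deletions shrinks the rewrite cost.

```python
# Toy model: the controller preserves every page it believes is valid.
def rewrite_cost(valid_pages_on_device, new_pages):
    """Page writes needed to add `new_pages` to a block holding valid pages."""
    return valid_pages_on_device + new_pages

# Without TRIM: the OS deleted all 6 pages, but the controller doesn't know,
# so it still preserves the 6 stale pages on the next write.
print(rewrite_cost(valid_pages_on_device=6, new_pages=2))  # 8

# With TRIM: the OS told the controller those pages are dead,
# so only the 2 new pages need to be written.
print(rewrite_cost(valid_pages_on_device=0, new_pages=2))  # 2
```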

Alas, if you hit the drive with back-to-back writes, TRIM doesn't help you. But if you spread out your write operations, the controller likewise spreads out the cleanup work over time, reducing the latency of write operations.

I should note that companies are coming up with clever ways to reduce the impact of writes. It is beyond the scope of this brief explanation to state how they work. But think of what you can do if you have spare area where you can temporarily put written data and serve it from there instead of where it needed to go.
 

amirm
And the one that made me explain how the operating system uses memory and manages traffic to and from hard disks/SSDs:

On 1293284888, Dawn Gordon Luks said... When you say to get plenty of RAM to use SSD drives, how much memory do you suggest? Is 4 GB enough, or do you recommend 8GB?

Good question. Short answer is as much as you can afford! Long answer follows :).

The ultimate goal of your system is to “feed” the CPU with data and instructions so that it can perform the functions you ask it to do. The CPU uses internal registers for fastest access. But there are only a handful of them. When it runs out, data is fed to it from the CPU “cache.” This is high-speed memory that sits inside the CPU or (in the older days) pretty close to it. Cache is about 4-10 times slower than internal registers. Once we run out of CPU cache space, we then go to main memory, which is 10X slower than the cache. The last line of defense is your hard disk/SSD, which is probably 100X slower still. So where possible, you want to maximize the size of the upstream space so as to avoid running out and having to go to the next level down and put up with its much slower performance.

Next we need to look at how the operating system utilizes memory. You have heard of the term Virtual Memory. That phrase means that the memory applications use does not necessarily correspond with physical memory in your computer. As applications load up and demand memory, the OS first looks in main memory to see if there is space. If there is not, it will then “make space” by utilizing the hard disk/SSD as an extension of your system memory. So if you have 4 Gigabytes of memory and 500 Gigabytes of free disk space, you can actually run 504 Gigabytes worth of applications, and all of those apps think they are running in main memory!

Of course, that trick has a price. As you run out of actual memory, the OS attempts to shuffle parts of each program in and out of memory from/to your hard disk/SSD. Recall what I said about this subsystem being 100X slower than main memory. The more you force this “paging” activity to occur, the slower your system will run, as anyone who has tried to run Vista on a 2 Gig machine recalls. Your goal should be to always buy enough memory to stop all paging activity due to insufficient space to run your applications.

We now get into a more advanced topic and that is disk caching. Since we know the hard disk subsystem is so slow, in the last 20 years, operating systems have sported disk caching. What happens is that when you read a file, the operating system first copies it into main memory. When you are done with that file, the operating system still hangs on to that data in main memory. The idea is that if you attempt to access the file again, a copy already exists in main memory and we avoid a second disk access and thereby, system performance is sharply increased for the second access.

Think of compiling your Crestron SIMPL Windows program. Crestron has a device database that it reads every time to build and compile your program. The first time you compile, the operating system reads the database from disk. The second time, assuming you have a sufficient amount of memory, that data is cached and work gets done faster.
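A minimal sketch of the read-cache idea (the dict-as-cache and the function names are my own illustration, not how any real OS implements its page cache):

```python
# Toy model of an OS read cache: the first read is "slow" (goes to disk),
# repeat reads are served from memory.
disk_reads = 0

def slow_disk_read(path):
    """Stand-in for an actual disk access; counts how often we hit the disk."""
    global disk_reads
    disk_reads += 1
    return f"contents of {path}"

cache = {}

def cached_read(path):
    if path not in cache:          # cache miss: go to disk, keep a copy in RAM
        cache[path] = slow_disk_read(path)
    return cache[path]             # cache hit: served from memory

cached_read("device.db")   # first compile: hits the disk
cached_read("device.db")   # second compile: served entirely from cache
print(disk_reads)          # 1
```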

For the above to work, there needs to be a sufficient amount of memory available to hold data read from the hard disk. Think of how large your hard disk is and the sizes of your files. The above Crestron database file alone is 120 megabytes. In addition to caching that file, you want to cache the Crestron tools themselves, any other project files and images, etc. It does not take long to get to 1 Gigabyte+ worth of stuff to be cached. My little laptop, as I type this, has cached 1.8 Gigabytes worth of stuff, and this is for everyday work!

There is also a related function that finally connects this to SSDs :). System memory is not only used to cache read operations but also as a way to delay or eliminate write operations. When you write to your file system (hard disk or SSD), the operating system only copies the file content to main memory. It does NOT immediately push it out to the hard disk. This sharply increases write performance as the application does not have to wait for the hard disk at all.

There is no free lunch however. At some point, the write operations can overwhelm the amount of free memory you have and the OS will then attempt to free up space. It does that by getting rid of some of the above read buffers and/or writing some of the data that it had cached to its final destination – the hard disk/SSD.

Now, it turns out that there are a lot of files which get created but never last. The process of compiling a program often includes creating a temporary work file which gets deleted at the end of the compile process. If you have enough memory, that file gets stored in system memory during the operation, and once the app deletes it at the end of the process, that space is freed with no obligation to ever write it to the hard disk!

Let’s put all of this together now. By having ample memory, you can disable paging for the Virtual Memory subsystem. If you do not have enough memory to run all of your apps, your programs may crash or error out as they run out of space. So if you disable paging, it is absolutely essential that you have plenty of memory in your system to avoid this issue. How much memory, you ask? Well, Windows tells you, but of course, without proper explanation, no one knows about it. The information is in the Task Manager (hold Ctrl+Shift+Esc down in that order and it will pop up). Here is an example from when I was testing Crestron SB on my laptop:



Look for the line which says “Commit.” You see two numbers separated by a slash (“/”). The number you are interested in is the first one. This is the total amount of memory all the programs in your system have demanded to be allocated. In my example above, it is 2.9 Gigabytes (2903 Megabytes). As an aside, the number to the right is the current limit, which is equal to the amount of memory + the page file size.

It is critical that you look at that number when you are running everything you are ever going to run on your machine. If you run Photoshop with large images, you had better start it and load a big image (just starting it doesn’t force it to allocate all the memory it needs). In the system above, I was only running System Builder plus Outlook, browser sessions, etc., so if that is all I need to have in it, then 3 Gigabytes of RAM is sufficient.

Here is the same display from my desktop machine which was just running SB and nothing else:



If you look to the left, you see I have 8 Gigabytes in that machine, and the amount of VM in use under Commit is 2 Gigabytes. My paging file there is on my hard disk, not the SSD, and at 15 Gigabytes it appears to be completely unnecessary and too big for this workload.
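Using the numbers from this desktop example, the Commit arithmetic works out as follows (a quick sketch; units in Megabytes):

```python
# Commit arithmetic using the desktop example's figures (units: MB)
ram_mb = 8 * 1024                        # 8 GB installed RAM
page_file_mb = 15 * 1024                 # 15 GB page file on the hard disk
commit_in_use_mb = 2 * 1024              # ~2 GB demanded by running programs

# The second Commit number (the limit) is RAM + page file size.
commit_limit_mb = ram_mb + page_file_mb
print(commit_limit_mb)                   # 23552

# With paging disabled the limit would be just RAM, so the workload fits
# only if commit-in-use stays below installed RAM.
print(commit_in_use_mb < ram_mb)         # True
```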

Once you take care of the amount of memory your applications need, you then need extra memory to reduce disk I/O per the earlier description. It is hard to say what this number needs to be, as it is highly application specific. But one thing is for sure: the more, the better!

Hope this wasn’t too much of a “deep dive” :). Note that in the interest of keeping this post understandable, I have oversimplified some concepts.
 
