
- Computer Speed Tips Guide -

Improving Your Computer's Performance

In this article I'll cover ways to get the best performance out of your system.


Before we go any further, I must give the standard disclaimer.
I cannot be held liable for any event that results from the use of this article, nor for any damage or loss of data. If you're unsure of anything below, do not attempt it, or thoroughly back up your data beforehand.
First, a few items of interest need to be covered. These will help you understand what is technically occurring in the background, so you can understand the "why," which will allow you to make the best choices later.


Memory Management and Virtual Memory


There are two levels of memory management; we'll cover the operating system's side for now. The Operating System (OS) has its own form of memory management that works in coordination with the hardware Memory Management Unit (MMU).
From the OS's view, there is a plentiful amount of contiguous (non-fragmented) memory. This is achieved through Virtual Memory, which uses a virtual memory address space. Memory addresses are basically numeric pointers to chunks of memory; in most modern OSes each address references one byte of memory. These virtual addresses are grouped into larger units called Pages. Pages are then mapped to real memory addresses (addresses backed by physical memory, aka RAM) through a lookup table called a page table. This only applies to data that needs to be stored in RAM. When the data is requested by the CPU (the main processor of the computer), a hardware unit called the Memory Management Unit (MMU) performs this translation on the fly.
Now, by the nature of virtual memory, there will generally be overflow: more virtual memory addresses are in use than there are real addresses from physical memory to map them to. The result is the "paging" file, also known as the swap, page, or virtual memory file. This file is in essence a reserved chunk of hard drive space that holds pages that have been "paged out" of RAM, with the pages to evict usually chosen by algorithms like LRU (Least Recently Used).
Now, you may be asking, "What is the point of this?" This arrangement simplifies both the creation of programs and the running of all software. If programs were mapped directly to RAM you would run into all kinds of problems, problems which would increase the complexity of programming software. With virtual memory you don't have to worry that you'll allocate too much RAM and cause a system crash; instead you simply allocate a large amount of virtual memory, which can be held in the paging file and paged in once it is needed. Basically, the system gives developers, OS and application alike, more head room.
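To make the page-table idea concrete, here is a toy Python sketch of the translation step. The 4 KiB page size and the table contents are assumptions for illustration; a real MMU does this in hardware with multi-level tables and a TLB cache.

```python
# Toy sketch of virtual-to-physical address translation (assumed 4 KiB pages).
PAGE_SIZE = 4096  # bytes per page

# Hypothetical page table: virtual page number -> physical frame number.
# A missing entry means the page is not in RAM (paged out or unallocated).
page_table = {0: 7, 1: 3, 2: 12}

def translate(virtual_address):
    """Split a virtual address into page number + offset, then look up the frame."""
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    frame = page_table.get(page_number)
    if frame is None:
        # The OS would service this fault by paging the data back into RAM.
        raise LookupError(f"page fault at virtual address {virtual_address:#x}")
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 3 -> 3*4096 + 4 = 12292
```

The split into page number and offset is the key trick: the table only has to track whole pages, not individual bytes.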

So, what does this teach us?
1.) The page file is in essence an extension of your memory subsystem.
2.) Its performance is important to your memory subsystem's performance.
3.) If one object is taking up a lot of physical memory and another needs more, the least recently used pages will be paged out to make room. It should be noted, though, that not all data in the page file needs to be assigned address space; data can reside there unaddressed and be assigned addresses later, once some are free. This allows a program to make a very large allocation, one that exceeds even the total virtual address space.
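The LRU paging idea from points 1-3 can be sketched in a few lines of Python. This is a simplified model of least-recently-used eviction, not the actual replacement policy Windows uses.

```python
from collections import OrderedDict

# Minimal sketch of LRU page replacement (illustrative only).
class LRUPageSet:
    def __init__(self, capacity):
        self.capacity = capacity      # number of physical frames available
        self.pages = OrderedDict()    # page number -> contents, oldest first

    def access(self, page, data=None):
        """Touch a page; evict the least recently used one if RAM is full."""
        evicted = None
        if page in self.pages:
            self.pages.move_to_end(page)              # mark as most recently used
        else:
            if len(self.pages) >= self.capacity:
                evicted = self.pages.popitem(last=False)  # "page out" the LRU page
            self.pages[page] = data
        return evicted  # what would be written to the paging file

ram = LRUPageSet(capacity=2)
ram.access(1); ram.access(2)
ram.access(1)                 # page 1 is now most recently used
print(ram.access(3))          # evicts page 2 -> (2, None)
```

Note that touching page 1 again saved it from eviction; page 2, the least recently used, got "paged out" instead.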

Clusters



Back in the early days of computing, an OS would reference specific blocks on a hard drive, as all hard drive manufacturers used a set number of blocks (a block being the intersection of a track and a sector) for every cylinder/track. This made it easy to reference each block, because the number of sectors in a track was predictable. However, this technique is inefficient: using the same number of blocks on the short inner tracks as on the longer outer tracks wastes storage space on the platter. Now we use zone bit recording, where the number of blocks varies with track/cylinder location: more on the outside tracks and fewer on the inner tracks. With this system, a large overhead is created if you wish to deal directly with blocks.
Because of this change, a modern OS defines an abstraction called "clusters." Clusters are a higher-level unit of storage space, similar to the pages discussed above, and are basically groups of contiguous blocks. The operating system views everything as clusters; the key is to understand that the OS only sees clusters. With this you can define the "cluster size," which is the minimum chunk of space any file can use. If a file is under the cluster size, it still takes up the entire cluster. If a file is over the cluster size, it is spread across multiple clusters, and a table of sorts keeps track of which clusters hold which data for which files. For example, say you have a cluster size of 4 KiB and you save a 2 KiB file to the drive; this results in a full 4 KiB being consumed on the drive, as the minimum allocation is one cluster. Now, say you still have a 4 KiB cluster size, but you're saving a 16 KiB file; this file will be spread out into 4 different clusters, each 4 KiB in size.
So, if the OS only sees clusters, how does it read from and write to a drive? The storage device's controller handles that job: it translates clusters into specific sectors on the drive.
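The 2 KiB and 16 KiB examples above boil down to a round-up calculation, sketched here in Python (the 4 KiB cluster size is the assumption from the example):

```python
import math

# Sketch of the cluster math from the example above (assumed 4 KiB clusters).
CLUSTER = 4 * 1024  # cluster size in bytes

def clusters_used(file_size):
    """A file always occupies whole clusters, so round up."""
    return max(1, math.ceil(file_size / CLUSTER))

def on_disk_size(file_size):
    return clusters_used(file_size) * CLUSTER

print(on_disk_size(2 * 1024))    # 2 KiB file still consumes 4096 bytes on disk
print(clusters_used(16 * 1024))  # 16 KiB file spans 4 clusters
```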

So, what does this teach us?
1.) The smallest space a file can take up is defined by the cluster size.
2.) A file over the cluster size will be split into multiple clusters.
3.) The smaller the cluster size and the larger the file, the more clusters will make up that file.
4.) If a file takes up multiple clusters, a table is used to locate each cluster that contains part of that file.

File Systems


In the Windows world there are 2 main types of File Systems (FS) you can use: FAT and NTFS, with subsets/subversions of each. FAT, the simpler of the two, contains the bare basics one might expect from a FS. A file system's main function is to provide a base for writing/reading data on a storage device. As discussed above, most modern file systems are built on clusters, so the file system's main job is to facilitate reading and writing clusters; however, a FS may also store other data in order to provide advanced features like security/permissions and per-file metadata (dates and such). FAT stands for File Allocation Table, the structure at its core, which creates a "map" of sorts defining which clusters are used for what. If a file is larger than the cluster size, its directory entry holds the file's first cluster along with a pointer; this pointer directs the OS to a specific row in the FAT that names the next cluster holding part of the file, which may in turn point to another row, and so on until it hits a row with an end marker stating that this is the end of the file. This method works quite well, but the table becomes a bit inefficient with larger files. NTFS has the MFT (Master File Table). The MFT does the same job as the FAT, but also contains attribute metadata for each file. NTFS has more to offer than just the MFT; it also provides features like file-level compression that the OS can decompress on the fly. It is important to note, though, that even where NTFS is the superior file system, it carries greater overhead than FAT, and the benefits only overcome that overhead on larger volumes. This means, in effect, that you will get better performance from a 2 Gig volume formatted with FAT than with NTFS.
The performance benefit of FAT's lower overhead starts to drop around the 2 Gig mark, past which NTFS becomes the better choice. This is partly because FAT's required cluster size increases as the volume grows; after a certain point (around that 2 Gig mark) it becomes too taxing.
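The chain-following described above can be sketched as follows. The cluster numbers and the dict-based table are toy assumptions for illustration, not the on-disk FAT format.

```python
# Toy sketch of following a FAT cluster chain.
# Each FAT entry holds the next cluster of the file, or EOF for the last one.
EOF = -1

fat = {5: 9, 9: 10, 10: EOF}   # hypothetical chain for one file
start_cluster = 5              # stored in the file's directory entry

def file_clusters(start):
    """Walk the FAT from a file's starting cluster to its end-of-file marker."""
    chain = [start]
    while fat[chain[-1]] != EOF:
        chain.append(fat[chain[-1]])
    return chain

print(file_clusters(start_cluster))  # [5, 9, 10]
```

This is also why the table gets inefficient for large files: a big file means a long chain, and every cluster requires one more hop through the table.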

Fragmentation


So, what happens when you save 20 different 4 KiB files, labeled 1 through 20, onto the hard drive, delete files 10 and 15, then save a single 10 KiB file to the drive? First, you allocated and used 80 KiB of space on the drive; then you deleted 2 files, leaving two separate 4 KiB gaps of free space in the middle of that allocated space. After that, you saved a 10 KiB file to the drive.
The result is that the 10 KiB file fills those two gaps in the middle for most of itself, then skips to the end of the allocated space to write its remaining 2 KiB. The file is thus using non-contiguous allocations of clusters. This is called fragmentation: a file is split into multiple chunks because it writes itself into the first free space it finds, to avoid wasting space. Since hard drives have mechanical latencies (Solid State Drives excluded), which get worse the more the head has to move around to read data, the drive becomes slower at reading the same file as that file becomes more fragmented. And, as you may figure, this slows down overall system performance.
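Here is a small Python simulation of that scenario, treating the drive as a list of 4 KiB clusters and writing first-fit. The first-fit policy is an assumption for illustration; real allocators are more sophisticated.

```python
# Sketch of the scenario above: 20 one-cluster files, two deleted, then a
# 10 KiB file written first-fit into the free gaps (assumed 4 KiB clusters).
disk = [f"file{i}" for i in range(1, 21)]   # clusters 0..19, one file each
disk[9] = None    # delete file 10
disk[14] = None   # delete file 15

def write_first_fit(name, clusters_needed):
    """Fill the first free clusters found, appending to the end if necessary."""
    used = []
    for i, slot in enumerate(disk):
        if slot is None and clusters_needed:
            disk[i] = name
            used.append(i)
            clusters_needed -= 1
    while clusters_needed:                   # no gaps left: grow at the end
        disk.append(name)
        used.append(len(disk) - 1)
        clusters_needed -= 1
    return used

# 10 KiB needs 3 clusters of 4 KiB: the two gaps plus one cluster at the end.
print(write_first_fit("bigfile", 3))  # [9, 14, 20] -> a fragmented file
```

The file's three clusters end up at positions 9, 14, and 20: three separate regions of the platter, and three separate head movements to read one file.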


Registry


Windows is in a way one massive application that supports running other software within it. When you create something like an OS you must also create an efficient means of storing settings and data for both the OS and its applications. This is where the registry comes in. The registry is a collection of database files: System, Software, SAM, UserDiff, Security, Default and NTuser.dat. These files hold all the settings for the OS's operation and its applications. To open and edit the registry, go to Start -> Run -> type in regedit. We'll discuss some changes below.

System Setup



OK, time for some setup guidelines we can follow using this information.

Paging File



Right-click on "My Computer" and go to "Properties," then "Advanced," and under Performance click "Settings." From there go to the "Advanced" tab, and under Virtual Memory click "Change."
Here you'll be able to set the size of the page file that we discussed earlier. Now, here is where we get into myth-land. There is a lot of talk around the web that you can simply disable the page file. While you certainly can, there is no performance to be gained from this, and more than likely performance to be lost. As discussed, least recently used items are paged out of physical RAM and into the page file; the result is that there is now more physical RAM free for actual use by the OS. By disabling the paging file, you force every piece of data loaded into RAM either to stay there, even if it has not been used in quite some time, or to be removed from RAM entirely. This causes two things to happen.
1.) You increase the amount of physical RAM you're using for the same task.
2.) If more RAM is needed, the OS must completely discard data from RAM instead of paging it out to the page file.

If discarded data is needed again, a page fault is raised and the data must be re-read from the drive. With a page file, instead of the data being dropped from the virtual memory space entirely, it would be paged out to the page file and, if requested later, pulled from the page file back into RAM. As you can see, the end mechanics look similar, yet without a page file you lose its primary purpose and usefulness: serving as working space for programs. Disable it and you'll run into many more page faults and higher RAM usage, as more "useless," least recently used items are stuck in RAM. That leaves less free RAM, which leaves less room for the OS's file cache (which also uses RAM), and so performance suffers elsewhere too. Not to mention it shrinks the room programs get to work in.
All in all, the page file is a needed element within the modern OS and should not be disabled, no matter how much ram you have.
Now the question is, how do you set it?

The page file needs to be at least as large as your largest memory requirement. To find this, sit and use your computer as heavily as you ever will in your environment, then watch the "Peak Commit Charge" in the Task Manager under the "Performance" tab. This is the minimum value you should set your page file to. The value may be a bit excessive, but there is no harm in setting the page file too large, only too small.
Once you have this value, convert it to megabytes (the Task Manager commonly reports it in KiB). Then set the page file to that value, using the same number for both the Max and Min. A fixed size avoids the CPU cost of dynamically resizing the page file and, more importantly, prevents the page file itself from becoming fragmented (split into multiple chunks as it grows and shrinks), which hurts overall performance. This is also why you don't want to create a page file "after the fact": if placed on the system drive, the page file should be the first thing you create after installing your OS. If you create it later, on a drive whose storage is not 100% perfectly packed, the page file will be fragmented right off the bat. This is still fixable, however, with the PageDefrag app mentioned below in the "Defragment!" section.
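A quick sketch of the unit conversion, using a hypothetical Peak Commit Charge reading:

```python
# Task Manager reports Peak Commit Charge in KiB; the virtual-memory
# dialog takes MB. The reading below is a made-up example value.
peak_commit_kib = 786_432          # hypothetical Task Manager reading

page_file_mb = peak_commit_kib // 1024
print(page_file_mb)                # 768 -> enter as both Min and Max
```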

Next, we can consider placing the page file on a separate drive from the main system drive. Putting the page file on a less-accessed drive helps performance when the system drive and page file are accessed concurrently. It is not a good idea, however, to place the page file on a separate partition of the same drive. Drives store data on cylindrical platters, and the linear speed under the head varies: faster on the outside and slower toward the inner parts of the platter. Because of this, data is stored from the outside in. If you create your main system partition at a given size, it occupies the outer area; a page file partition created after it will inevitably sit further toward the inner area of the platter, hindering its throughput compared to leaving the page file within the system partition. So instead, keep it in the same partition, or on a separate drive. Now, if you do have a second drive, you may remember what I said earlier: FAT is the king at around 2 Gigs or under. So, on that second drive, make a partition 2 Gigs or under (the first partition on the drive), and instead of using NTFS, use FAT; the decrease in overhead will increase its performance.

You may also want to consider placing the page file on a RAID setup, which can provide superior performance. You can look into setting up RAID with my RAID guide. Just be sure, if you do go this route, not to place the page file on a RAID level that carries a write penalty (i.e. RAID 3, 5, or 6), as this will give unpredictable (usually poor) performance.


Defragment!


Now, this may be the simplest thing you can do: defragment your hard drive. To do this within Windows, go to Start -> All Programs -> Accessories -> System Tools -> Disk Defragmenter. However, even though that tool is adequate, it does not do a great job. I would really recommend tools like PerfectDisk and Diskeeper, excellent retail tools that do a better job than the one built into Windows. It is worth noting that Windows' own defragmenter does not defragment important system files like the MFT, but Diskeeper and PerfectDisk do. Even better, there is also a free tool from Sysinternals, PageDefrag, that defragments important system files, excluding the MFT but including the page file. You can get it here; it will require you to restart your computer, as the page file cannot be accessed while Windows is running.


Cluster Sizes


As mentioned earlier, the cluster size determines the minimum space a file can take up, and files larger than the cluster size are split into multiple clusters, the count being clusters = ceil([file size] / [cluster size]), rounded up to whole clusters. This creates an interesting dilemma: what should the cluster size be set to? A large cluster size wastes a ton of disk space if a lot of your files are smaller than a cluster; but for files over that size, it reduces the number of clusters they must be split into. What this means is:
-Small cluster sizes: more files will be split into multiple clusters, but your storage efficiency will be better. However, your system will fragment more easily.
-Large cluster sizes: fewer clusters are used per file, which helps reduce future fragmentation, at the cost of wasted space in partially filled clusters.
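To see the tradeoff numerically, here is a small Python sketch comparing a few cluster sizes over a made-up mix of file sizes (all the numbers are assumptions, not benchmarks):

```python
import math

# Compare cluster sizes for a hypothetical mix of files: total slack
# (wasted space) vs. total cluster count (a proxy for fragmentation risk).
files = [1_000, 3_000, 50_000, 200_000, 1_500_000]  # file sizes in bytes

def tally(cluster_size):
    clusters = sum(math.ceil(f / cluster_size) for f in files)
    slack = clusters * cluster_size - sum(files)    # allocated minus actual
    return clusters, slack

for size in (4096, 32768, 65536):
    clusters, slack = tally(size)
    print(f"{size:>6} B clusters: {clusters} clusters, {slack} bytes wasted")
```

With this mix, 4 KiB clusters waste only ~11 KB but need 431 clusters, while 64 KiB clusters need just 30 clusters at the cost of ~200 KB of slack: the dilemma in numbers.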

The choice on which to go for is yours, as it is dependent on your environment and uses.


Compression


Most people will probably not consider NTFS's compression a performance booster. However, file/volume compression within NTFS can compress a file and decompress it on the fly: when NTFS decompresses a file, it first loads the compressed data off the drive, then decompresses it. So, what does this mean? Well, which would be faster: pulling a 180 MiB file off a mechanically latent hard drive, or pulling a 20 MiB file off the drive? Hard drives are the main bottleneck in transfer/throughput performance. Using compression is like compressing a large file on a server, having the client download the smaller file, and then uncompressing it: you transfer the same amount of data in less time. There is one catch, though: processor performance. NTFS's dynamic compression and decompression will increase CPU utilization. However, this option is now a viable solution with dual or even quad core processors, where one core can compress and decompress files on the fly while you're busy working on the other(s), all while achieving technically better throughput. It is, at the very least, worth trying out.
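A back-of-the-envelope sketch of that tradeoff, with illustrative (assumed) throughput numbers rather than measurements:

```python
# Why reading a compressed file can be faster: disk time dominates, so
# reading fewer bytes can win even after paying for CPU decompression.
DISK_MBPS = 60          # assumed sequential read throughput of the drive
DECOMPRESS_MBPS = 400   # assumed decompression speed of one CPU core

def read_time(compressed_mib, original_mib):
    disk = compressed_mib / DISK_MBPS      # time to pull the bytes off disk
    cpu = original_mib / DECOMPRESS_MBPS   # time to expand back to full size
    return disk + cpu

uncompressed = 180 / DISK_MBPS            # raw 180 MiB file: 3.0 s of disk time
compressed = read_time(20, 180)           # read 20 MiB, decompress to 180 MiB
print(round(uncompressed, 2), round(compressed, 2))  # 3.0 vs 0.78
```

Under these assumptions the compressed read wins by roughly 4x; with a slow CPU or a fast drive (an SSD, say) the balance can tip the other way, which is why it's worth testing on your own hardware.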


Disable 8.3 File Name Creation


Back in the days of DOS and the use of FAT12 and FAT16, file names were required to use a special format known as the 8.3 file name format: 8 characters, a period, and 3 characters to identify the file, e.g. 12345678.123 or qwertyui.zip. These days an extension system allows up to 255 characters in a file name, but for compatibility purposes each long file name is truncated to the 8.3 format and written alongside the long name. That way, if a file ever needs to be read by an older OS or written to an older FAT FS, it can be. However, if you know for a fact you'll never be dealing with DOS (or its 16-bit applications), or a FAT FS below FAT32, you can disable this name creation.
There are a multitude of methods you can use for this, but the simplest is:
1.) Go to Start, then Run, type in regedit
2.) Navigate to HKLM -> System -> CurrentControlSet -> Control -> FileSystem
3.) Change NtfsDisable8dot3NameCreation to 1. You'll then have to restart your computer for the change to take effect.

Alternatively, I have made a quick .reg file that will automate this for you. Simply download the file (a very small download) and double-click it; a prompt will ask if you want to add the entry. Say yes, then restart.
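For reference, the contents of such a .reg file would look like this (the standard regedit export format; merging it sets the value described above):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"NtfsDisable8dot3NameCreation"=dword:00000001
```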



Disable Last Access Update in NTFS


Each time you view/list a directory or a set of directories, the "Last Access" time stamp is updated for each one. If you wish to speed up directory listings, you can disable this. To do so, follow these instructions:
1.) Go to Start, then Run, type in regedit
2.) Navigate to HKLM -> System -> CurrentControlSet -> Control -> FileSystem
3.) Change NtfsDisableLastAccessUpdate to 1. If you do not have this entry (some systems may not), add it by right-clicking -> New -> DWORD Value -> name it NtfsDisableLastAccessUpdate -> set its value to 1. You'll then have to restart your computer for the change to take effect.

Alternatively, I have made a quick .reg file that will automate this for you. Simply download the file (a very small download) and double-click it; a prompt will ask if you want to add the entry. Say yes, then restart.
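For reference, the contents of such a .reg file would look like this (the standard regedit export format; merging it sets the value described above):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"NtfsDisableLastAccessUpdate"=dword:00000001
```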



Startup Items


When you install software or hardware, or even just use your browser, startup entries get added to your system. These entries define drivers that need to be loaded, background services, and, most useless of all, taskbar/system tray items (the items that load into the bottom right side of the taskbar). Most of the time these items are unnecessary and can be removed. This will both reduce the idle RAM usage you may experience and decrease the amount of time it takes your computer to start. Now, there are, as usual, a few ways you can go about this. The first is a built-in utility called msconfig; to launch it:
1.) Go to Start, then Run, type in msconfig (this utility does not exist in Windows 2000; if you have Windows 2000, use the other tool I'm about to mention).

From here, go to the Startup tab. In most cases (meaning nearly 100% of them) you can go straight through and disable all of them, unless you prefer that one or two of your programs load at startup (AIM and such). After you're done, click Apply and restart.
You can also go to the Services tab, check "Hide All Microsoft Services," and look through those. Normally, you can remove all of these entries too. However, some programs, like 3ds Max or Photoshop (most Adobe products, for that matter), usually have a licensing service running in the background (Adobe's is called the "Adobe LM Service"), so do your research before you remove any entry whose purpose isn't obvious.
You may also leave the hide checkbox unchecked and disable services you know you don't need, but again, do your research before disabling any you're not sure about. As an example, in most cases you can disable the Indexing Service, which simply makes your files faster to search through, at the cost of random performance drops as it periodically indexes files. One service some recommend disabling is the System Restore service; I would strongly advise against this, as it backs up important system files and your registry, and if you run into a problem you can use System Restore to fix it and restore those settings/files. You could even use the "Disable all" button, but if you do, your computer will still start, though not much will function :-)

If you have Windows 2000 (which does not have msconfig), or you wish to use a more advanced tool, I would recommend giving Sysinternals Autoruns a try. I must warn you, though: this is an advanced-user tool, and you can render your system inoperable with a single click. However, no other tool I've run across gives the user as much control. It covers everything from services, browser helper objects (BHOs), and drivers to regular startup items and much more.



Msconfig




Autoruns




Drivers


The simplest "tweak" you can do to your system is to update your device drivers, including your chipset and video card drivers. Keeping your drivers up to date will not only give a possible performance increase, but may also resolve any conflicts and/or technical problems that have been occurring.




All Content Copyright Jarrod Christman - 2015