Memory Basics

Jus' the facts ma'am

On these next few pages we will discuss how the system memory works, how memory is accessed, and how it relates to system timing. The speed of the system memory is of major importance to overall performance and there are many different factors that influence the "real-world" speed of memory.

The Memory Controller and Access Times

The memory controller, a hardware logic circuit found in every modern PC chipset, generates the necessary signals to control the reading and writing of information from and to the memory, and interfaces the memory with the other major parts of the system.

When memory is read or written, a specific process is used to control each access to memory. This consists of having the memory controller generate the correct address signals to specify which memory location needs to be accessed. The data at that location is then made available on the data bus to be read by the processor or whatever other device requested it.

Memory, unlike a hard or floppy disk, which is laid out as a long string of individual bytes, is actually organized as a matrix, rather like a ledger with rows and columns. To access a byte of memory, the memory controller first identifies the row that is required and then the column. These are determined from the memory address provided to the memory controller on the address bus (a part of the memory bus).

The length of time it takes for the memory to produce the data required, from the start of the access until the valid data is available for use, is called the memory access time. It is normally measured in nanoseconds (abbreviated ns). Today memory usually has access times ranging from 5 to 70 nanoseconds. This is the speed of the DRAM memory chip itself, which isn't necessarily the same as the real-world speed of the overall memory system.
Using this matrix layout allows for the creation of DRAM chips with fewer pins. But since only half the address is sent at a time, the row and column addresses cannot arrive simultaneously, which slows the access; performance is traded off for cost.
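The row-and-column scheme described above can be sketched in code. This is a hypothetical illustration, not the logic of any particular chipset: the bit widths (a 12-bit row and 12-bit column, giving a 4096 x 4096 matrix) are assumptions chosen only to show how one flat address splits into two halves.

```python
# Illustrative sketch: how a memory controller might split a flat
# address into the row and column halves of a DRAM matrix.
# The 12/12 split below is an assumption for demonstration.

ROW_BITS = 12
COL_BITS = 12

def split_address(addr):
    """Return the (row, column) pair for a flat memory address."""
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    col = addr & ((1 << COL_BITS) - 1)
    return row, col

# Row 171, column 3294 of the matrix:
print(split_address(0xABCDE))  # (171, 3294)
```

Because the row and column share one set of pins, the controller would drive the row value first and the column value on a later cycle, which is exactly the cost-for-speed trade-off noted above.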

Asynchronous and Synchronous DRAM

Older conventional DRAM, of the Fast Page Mode or EDO type, is asynchronous. Simply put, this means that the memory is not synchronized to the system clock: the access and throughput of the memory data are not coordinated in time. Asynchronous memory works fine in lower-speed memory bus systems but tends to drag in high-speed (66 MHz and above) memory systems.

A newer type of DRAM, called "synchronous DRAM" or "SDRAM", is synchronized to the system clock. This type of memory is much faster than asynchronous DRAM and can be used to improve the performance of the system. It is more suitable to the higher-speed memory systems of the newer PCs.

Then there is PC100 SDRAM. This too is synchronous DRAM, but it is guaranteed to run at 100 MHz. SDRAM of the earlier type can also run at 100 MHz if you are fortunate, but PC100 is of a more refined manufacture to ensure its ability to run at this frequency. You will learn more about these different types as we progress.

The Memory Bus

The memory bus is the circuitry used to carry memory addresses and data to and from the system's RAM. In most PCs, the memory bus is shared with the processor bus, connecting the system memory to the processor and the chipset. On the newest mainboards it is also interconnected with the Accelerated Graphics Port (AGP) as well. The memory bus is a system with two parts: the data bus and the address bus. Most often, a reference to "the memory bus" really means the data bus, which carries actual memory data within the computer. The address bus is used to select the memory address that the data will be read from or written to.

The bandwidth of the data bus is how much information can flow through it, and is determined by the bus width (in bits) and its speed (in MHz). Let's see if we can make that more clear.

If you think of the data bus as a freeway system with cars traveling back and forth on it, its width is the number of lanes and its speed is how fast the cars are traveling (think of MHz as MPH). The bandwidth, then, is the amount of traffic passing over the highway in a given amount of time. More bandwidth = more information flow = better performance.
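The width-times-speed arithmetic can be made concrete with a small calculation. The figures below are typical of the era discussed here (a 64-bit data bus at 66 MHz and at 100 MHz), and the sketch assumes one transfer per clock cycle, which is the simple peak case rather than real-world throughput.

```python
# Peak data-bus bandwidth: width (bits) x speed (MHz), assuming
# one transfer per clock cycle. Result is in MB/s (millions of
# bytes per second).

def peak_bandwidth(width_bits, speed_mhz):
    """Peak transfer rate in MB/s for a bus of the given width and clock."""
    bytes_per_transfer = width_bits // 8
    return bytes_per_transfer * speed_mhz

# A 64-bit memory bus at 66 MHz:
print(peak_bandwidth(64, 66))   # 528 MB/s
# The same bus at 100 MHz (the PC100 clock):
print(peak_bandwidth(64, 100))  # 800 MB/s
```

Widening the bus or raising its clock both add "lanes of traffic" in the freeway sense: either change raises the product and thus the peak bandwidth.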

The width of the address bus controls how much system memory the processor can read from or write to. Thinking back to the freeway analogy, the address bus carries information about the different exits, truck stops, and rest areas on the highway. The wider the address bus, the more info the exit signs can contain, and the more exits the freeway can support. Until recently systems could address far more memory than they would ever use. This is slowly changing, however, as programs get larger and throughput becomes faster.
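The relationship between address bus width and maximum memory is a simple power of two: each extra address line doubles the number of distinct locations that can be named. As a sketch (the 32-line figure is the familiar Pentium-class case):

```python
# Each address line carries one bit, so n lines can name 2**n
# distinct byte locations.

def addressable_bytes(address_lines):
    """Maximum memory addressable with the given number of address lines."""
    return 2 ** address_lines

# 32 address lines reach 4 GB:
print(addressable_bytes(32) // 2**30)  # 4
# 20 lines (an old 8088-era bus) reach only 1 MB:
print(addressable_bytes(20) // 2**20)  # 1
```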

Fighting the Memory Bottleneck - Or What's the Cache

The memory bus has become a somewhat limiting factor in overall system performance. While older computers ran the processor at the same speed as the memory bus, newer ones run the processor at 2, 3 or even more times the speed of the memory. The faster the processor runs relative to the memory, the more often it has to wait for information from memory. This is why the system cache is so important. Cache RAM is much faster than the main memory, which means the processor can do more and wait less.