History of Computer Memory

This post is dedicated to the history of computer memory (RAM). Personally, I find this one of the most interesting elements of the computer, and one that has undergone the most transformation from its origins to today. In my first post [what is a computer] I noted that it is precisely memory that separates computers from other similar machines. It is also the component I have struggled with the most: it limits how you can approach certain tasks, which in turn can shape your projects, and it heavily affects your daily computer use habits by dictating how many apps or browser tabs you can keep open at once.

Delay line memory

One of the earliest forms of computer memory is delay-line memory. In its earliest form, it was an elongated tube filled with mercury, with piezoelectric crystals at either end. See the figure below.

Unlike other forms of memory, this was used not for storing individual bits (single ones or zeros) but sequences of binary digits, or words (similar to modern bytes). When information was written to memory, an electrical signal would be transmitted to a piezoelectric crystal. Piezoelectricity is a property of certain materials which means that (a) they deform when electricity is applied to them and (b) they emit electricity when they are deformed. Acoustic guitars have piezoelectric pickups allowing them to be connected to speakers, and most pocket lighters have piezoelectric spark generators. In the delay line, the piezoelectric crystal would convert the electrical signal into vibrations, which would travel through the mercury tube to a piezoelectric receiver at the other end and be converted back into electricity. The recovered signal could then be read and used by the computer. Crucially, to keep the data from vanishing, the recovered signal was also amplified, reshaped and fed back into the transmitting crystal, so the bits circulated through the line continuously until they were overwritten.

Delay line memory from the EDVAC computer, capable of storing 560 bits of data (source: Flickr)
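To make the recirculation idea concrete, here is a minimal Python sketch of a delay line modelled as a FIFO queue. The `DelayLine` class and its interface are my own illustrative assumptions, not a description of any real machine's circuitry: a bit written at the transmitting crystal reappears at the receiver one full delay later and is fed back in, so the stored word keeps circulating.

```python
from collections import deque

class DelayLine:
    """Toy model of a mercury delay line: a fixed-length FIFO pipe."""
    def __init__(self, capacity_bits=560):  # 560 bits, as in the EDVAC unit above
        self.line = deque([0] * capacity_bits)

    def tick(self, write_bit=None):
        """One bit-time: take the bit arriving at the receiving crystal
        and recirculate it, unless we are overwriting it with new data."""
        out = self.line.popleft()                 # bit reaching the far end
        self.line.append(out if write_bit is None else write_bit)
        return out

line = DelayLine()
for b in (1, 0, 1, 1):                  # write a 4-bit word into the line
    line.tick(write_bit=b)
for _ in range(560 - 4):                # wait one full circulation
    line.tick()
print([line.tick() for _ in range(4)])  # -> [1, 0, 1, 1]
```

Note how a read is only possible when the desired bits happen to arrive at the receiving end: this is why delay-line memory was slow, as the sketch's waiting loop suggests.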

As one might guess, this design had numerous problems. First, the repeated digital-to-analog-to-digital conversions degraded the signal, which could cause bits to flip. Second, this implementation of memory was large, cumbersome, expensive and slow. Finally, mercury is toxic to humans, which meant that operating computers with such memory constituted a health hazard.

At this point you might ask: if it was unreliable, expensive and toxic, why was it used in the first place? The answer is simple. To cut costs and simplify development of the early computers, engineers sought to reuse existing parts wherever possible. Delay-line tubes were widely used during WWII to process radar data: by mixing the "fresh" signal from the radar with the delayed signal from the tube, it was possible to cancel out returns from stationary objects such as buildings and terrain. Since many of the engineers who built the first computers after the war had worked on radar installations during it, the choice made perfect sense. Given their experience and familiarity with the technology, they saw delay-line tubes as a quick way to implement memory in the early machines and simply went for it. However, as the field and the technology matured, this implementation of memory was abandoned in favor of better technology.

Cathode Ray Tube

TV became mainstream in the 1950s, and it was this unlikely source that inspired the next generation of computer memory. Back in the day, TVs were not flat; they were big clunky boxes in which the picture was projected onto the screen by a special vacuum tube called a cathode ray tube (CRT). In this tube, the cathode would emit a beam of electrons which would hit a phosphorescent screen-like anode. Oscillating magnetic fields would steer the electron beam so it traversed the whole screen to form an image. If we took the same tube and added a pickup plate over the face of the screen to sense the charge left behind by the beam, we would have a device that could function as computer memory.

This memory was a tremendous improvement on earlier designs. It was much cheaper and more compact, and it did not contain a tube full of hazardous material. Furthermore, it provided another benefit: using a secondary tube, its contents could be displayed to the outside world. In other words, computer operators and programmers could literally look at the contents of memory. This gave a visual cue that the program was executing, and some highly skilled programmers could even use this display for debugging purposes.

Output of CRT memory (source: Manchester University)

Magnetic core memory

CRT memory had relatively low density (i.e. it required a lot of physical space per bit), and it was fragile and could break easily. Something was needed that could fit more storage into a smaller space while also being more rugged and durable.

This need was answered by a remarkably simple solution: toroidal (doughnut-shaped) magnets. Such a magnet holds a magnetic field whose polarity can be reversed by running a current through the hole in the center of the doughnut.

Now, if we run two wires through the doughnut – let's call them "Read/Write" and "Sense" – we can use this property to store data. Running a pulse through the Read/Write wire "sets" the polarity of the magnetic doughnut depending on the direction of the current; that is analogous to writing a state. If a current is run through the Read/Write wire again and the doughnut's polarity flips as a result, the changing magnetic field induces a current in the Sense wire; if the polarity already matched, almost nothing is induced. The presence or absence of that pulse on the Sense wire reveals the stored state. In this way, magnetic doughnuts could be set to a desired state and later read back.

Multiple doughnuts could be arranged in a grid, and instead of a single Read/Write wire we could use two – one along the X axis and one along the Y axis. Each wire would carry only half the current needed to flip a doughnut, so only the doughnut at the intersection of the two energised wires receives enough combined current to be read or written. By doing this, individual bits of memory could be addressed and read easily.

This design had a small problem: reads were destructive. Reading worked by driving the doughnut towards a known polarity and sensing whether it flipped, so the very act of reading the data erased it. However, this could easily be overcome by immediately following every read with a write that restored the value, as the sketch below illustrates.
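Here is a small Python sketch of that read/write cycle, assuming a hypothetical 4×4 core plane (the class and method names are mine, purely for illustration). It models the coincident-current addressing and the rewrite that follows every destructive read.

```python
class CorePlane:
    """Toy model of a magnetic-core plane: a grid of one-bit 'doughnuts'."""
    def __init__(self, size=4):
        self.cores = [[0] * size for _ in range(size)]  # core polarities

    def write(self, x, y, bit):
        # Each of the X and Y wires carries half the flipping current,
        # so only the core at their intersection changes state.
        self.cores[y][x] = bit

    def read(self, x, y):
        # Reading drives the core towards 0; a pulse appears on the
        # Sense wire only if it held a 1 -- erasing the stored value.
        bit = self.cores[y][x]
        self.cores[y][x] = 0       # destructive read
        self.write(x, y, bit)      # immediate rewrite restores the bit
        return bit

plane = CorePlane()
plane.write(2, 1, 1)
print(plane.read(2, 1))   # -> 1
print(plane.read(2, 1))   # -> 1 (still there, thanks to the rewrite)
```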

This design was an enormous success, and most mainframe computers of the 1960s used such memory. In fact, the approach was so robust and reliable that it was even chosen as the memory implementation for the Apollo Guidance Computer and was used in all Apollo missions, including the moon landings.

The problem was that, despite numerous attempts to automate the process, core memory mostly had to be assembled and wired by hand. That meant that for 1 KB of memory, 8,192 doughnuts (1,024 bytes × 8 bits) had to be threaded and wired manually. Furthermore, although this design was a dramatic improvement in storage density over its predecessors, computers kept evolving and shrinking. The main driver of this trend was that solid-state transistors were replacing vacuum tubes, allowing computers to become smaller and more tightly integrated. Naturally, this innovation affected computer memory as well.

Solid state semiconductor memory

The figure below shows the S-R latch – a simple circuit built from two cross-coupled NAND (NOT AND) gates which can be set to a certain state and hold it. In a NAND-based latch the inputs are active-low: pulsing the S (Set) input low drives the Q output high, and Q stays high even after S returns to its resting (high) level. Similarly, pulsing the R (Reset) input low drives Q low and activates the complementary output Q', and the latch remains in that state after R is released.
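To see the "holding" behaviour in action, here is a minimal Python simulation of a cross-coupled NAND latch. The function names are my own, for illustration only; the loop simply lets the feedback between the two gates settle.

```python
def nand(a, b):
    """NAND gate: output is low only when both inputs are high."""
    return 0 if (a and b) else 1

def sr_nand_latch(s_n, r_n, q, q_bar):
    """One settling pass of a cross-coupled NAND latch.
    Inputs are active-low: s_n=0 sets Q, r_n=0 resets it."""
    for _ in range(4):            # iterate until the feedback loop settles
        q = nand(s_n, q_bar)      # each gate's output feeds the other's input
        q_bar = nand(r_n, q)
    return q, q_bar

q, q_bar = 0, 1                            # start in the reset state
q, q_bar = sr_nand_latch(0, 1, q, q_bar)   # pulse Set low   -> Q = 1
q, q_bar = sr_nand_latch(1, 1, q, q_bar)   # release inputs  -> Q stays 1
print(q, q_bar)                            # -> 1 0
q, q_bar = sr_nand_latch(1, 0, q, q_bar)   # pulse Reset low -> Q = 0
q, q_bar = sr_nand_latch(1, 1, q, q_bar)   # release inputs  -> state held
print(q, q_bar)                            # -> 0 1
```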

It has been known since about the 1920s that logic gates can be connected in a way that holds state, and circuits of this kind were used to some extent in the British Colossus computers during WWII. However, implementing computer memory this way was simply not practical until the advent of solid-state transistors and integrated circuits. After all, holding a single bit would have required two vacuum tubes, which take up significantly more space than a single magnetic doughnut.

All of that changed with the adoption of integrated circuits (ICs). Since an IC can fit hundreds of logic gates onto a single chip, it became possible to use latches similar to the one above to achieve unparalleled memory densities. This technology, in a very similar form, is in use to this day and powers whatever device you are reading this on. The main improvements in computer memory since the late 1970s, when this technology was adopted, have come primarily from improvements in IC design and manufacturing.
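As a rough illustration of how latches scale up into addressable memory, here is a toy sketch that reuses the `sr_nand_latch` function from the previous example. The `LatchMemory` class and its interface are hypothetical, meant only to show the idea of one latch per bit, with an address selecting which cell's inputs are driven.

```python
class LatchMemory:
    """Toy memory: one NAND latch per bit, selected by address."""
    def __init__(self, n_bits=8):
        self.cells = [(0, 1)] * n_bits     # (Q, Q') pairs, all cleared

    def store(self, addr, bit):
        # Pulse the selected cell's Set or Reset input low, then release.
        s_n, r_n = (0, 1) if bit else (1, 0)
        self.cells[addr] = sr_nand_latch(s_n, r_n, *self.cells[addr])

    def load(self, addr):
        return self.cells[addr][0]         # reading Q is non-destructive

mem = LatchMemory()
mem.store(3, 1)
print(mem.load(3), mem.load(4))   # -> 1 0
```

Unlike core memory, reading a latch does not disturb it, which is one reason this approach was such a good fit once ICs made it affordable.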

What is next?

Having surveyed the evolution of computer memory technology from some of its earliest instances to pretty much today, it is natural to ask: what's next? As you might expect, we do not know for sure, but we can speculate. For me, one of the most exciting possibilities is that computer storage will reach nearly the same speed as computer memory, making the two nearly indistinguishable. This is an exciting thought, as it would make it relatively easy to expand the memory of an average computer into the terabyte range, which would have significant implications for all spheres of computing, from databases to gaming.

As I write this, the dominant RAM technology is DDR4, with DDR5 announced to appear later this year. The fastest SSDs on the market are already approaching the speeds of DDR3 – the previous-generation RAM still widely used in older machines. Clever use of SSDs has already sped up the newest gaming consoles, so one can hope the same will happen for computers as well.


This and other posts in this series are only possible due to extensive reading and desk research I’ve carried out. Check out the Resources section to see some of the most interesting books and documentaries I’ve identified along the way.
