Magnetic Ferrite-Ring Core Memory
When I was a lad back in 1975, I worked as a student engineer testing military aircraft computer memory modules. Looking back now, they probably represented the ultimate development of ferromagnetic ring-core technology. Each module measured about 18 x 10 x 6 cm and contained nearly 600,000 ferrite ring cores, providing 32K x 18-bit words of Non-Volatile Random-Access Memory or NVRAM. Each core was about 0.33mm in diameter, and three wires had to be fed through the hole in its centre. No machine had the precision to weave this metallic fabric, so people with very steady hands and incredible eyesight were employed to make it by hand! That, and the full military specification, accounted for the price tag of £25,000 each – in 1975 money.

Core memory exploits the hysteresis property of ferromagnetic materials, whereby they remain magnetised after the magnetising force is removed. If the core is magnetised in one direction, this can represent a logic 1. Reverse the force and the core is flipped to the opposite polarity – a logic 0. Unfortunately, the method used to read the data destroys it, so a read cycle has to be followed by a write-back of the original state.
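The destructive read-then-write-back cycle can be sketched in a few lines of Python. This is a toy model only – real core planes used coincident half-select currents on X and Y wires plus a shared sense wire, and the class and function names here are invented purely for illustration:

```python
# Toy model of one ferrite core (illustrative only, not a real core plane).

class FerriteCore:
    def __init__(self):
        self.state = 0  # remanent magnetisation: logic 0 or 1

    def read(self):
        """Destructive read: drive the core towards 0 and watch the sense
        wire. A pulse appears only if the core flips from 1 to 0."""
        sensed = (self.state == 1)  # flux change induces a sense pulse
        self.state = 0              # the act of reading has erased the bit
        return 1 if sensed else 0

    def write(self, bit):
        """Drive the core to the requested polarity."""
        self.state = bit

def read_cycle(core):
    """A full read cycle: read, then write back the value just read."""
    bit = core.read()   # value recovered, but the core now holds 0
    core.write(bit)     # restore the original state
    return bit

core = FerriteCore()
core.write(1)
assert read_cycle(core) == 1  # data survives only because of the write-back
assert read_cycle(core) == 1  # still 1 on a second read
```

Without the `core.write(bit)` step, the second read would return 0 – which is exactly why every core-memory read cycle had to include the restore.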
Early Semiconductor Memory
In 1975 there were semiconductor memory chips available: the Intel 2102 1K x 1-bit Static RAM and the 1103 Dynamic RAM of the same capacity. They were being used in commercial computers, but were not considered reliable enough for the military environment. Being 'volatile' – that is, losing data when the power supply was switched off – didn't help either.
Generally, computers need two sorts of memory: program memory (usually volatile), from which instruction code can be read as fast as the processor needs it, and slower but higher-capacity non-volatile storage containing program code to be loaded into the program memory.
Current Technology for General Purpose (Desktop/Laptop) Computers
PC designers have always used one form or another of Dynamic RAM for program memory. Once the technology had advanced to the point where a single cell, or bit, had been reduced to a single transistor plus a capacitor, nothing else could compete in terms of density and access speed. Over the years, a 'scaling' process took place whereby the physical size of the cell was reduced, allowing more and more memory cells to be crammed onto a silicon die. The reducing size has led to increasing speed, but the need for constant 'refreshing' every few milliseconds means that DRAM has always been power-hungry. Refreshing is necessary because the charge on the capacitor, which represents the logic state, gradually leaks away.

Static RAM, which doesn't need refreshing, uses less power and is faster, so it seems like a better choice. The trouble is that, because it needs up to six transistors per cell, it's not possible to get anywhere near the same number of bits per chip as Dynamic RAM. Both Dynamic and Static RAM are volatile and need to work with non-volatile storage based on magnetic tape, floppy disk or hard disk technologies. Non-volatile RAM devices have been under development since the 1980s, but until now they have been unable to keep up with the scaling of hard disks and Dynamic RAM, in both capacity and speed.
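The leaky-capacitor behaviour that forces DRAM refreshing can be caricatured in a short Python sketch. The decay rate, read threshold and the `DramCell` model below are invented purely for illustration – real cells leak in nanoamps and are refreshed row by row by the memory controller:

```python
# Caricature of a DRAM cell: charge decays each "tick"; a refresh reads
# the cell and rewrites it at full charge. All constants are assumed.

DECAY = 0.7        # fraction of charge surviving each tick (invented)
THRESHOLD = 0.5    # below this, the cell reads as 0 (invented)

class DramCell:
    def __init__(self, bit):
        self.charge = 1.0 if bit else 0.0

    def tick(self):
        self.charge *= DECAY  # the stored charge gradually leaks away

    def read(self):
        return 1 if self.charge > THRESHOLD else 0

    def refresh(self):
        # Read the current value and rewrite it at full charge.
        self.charge = 1.0 if self.read() else 0.0

cell = DramCell(1)
cell.tick()                # charge = 0.7, still reads as 1
cell.refresh()             # restored to full charge
for _ in range(3):
    cell.tick()            # no refresh: charge falls to 0.343
assert cell.read() == 0    # the stored 1 has leaked away
```

The point of the sketch is the timing constraint: `refresh()` preserves the bit only if it runs before the charge crosses the threshold, which is why real DRAM must be refreshed every few milliseconds, cell by cell, forever.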
Current Technology for Embedded Computing
Embedded processors really need non-volatile program memory because, for most projects, there is no room for a hard disk or an optical drive. In most cases a pre-programmed Read-Only Memory (ROM) would suffice. A characteristic of embedded processors is that they (usually) run only a single application, unlike a general-purpose machine such as a PC. The problem is that firmware development often requires many program-debug-edit-reprogram cycles, and this is going to be very expensive if the memory chip has to be thrown away every time a bug is found.