What’s the Difference Between All Those Emerging Memory Technologies?
Few things have stayed “new” as long as emerging memory technologies. Whether MRAM, PCM, ReRAM, or FRAM (or any of the many other names these technologies go by), they have been bandied about as the “Next Big Thing” for decades, yet they have never hit the mainstream.
Let’s have a look at the leading ones, learn why they are considered necessary, and discover why they have taken so long to reach the mainstream.
Why They’re Necessary
Chip costs are determined by two factors:
- The cost of manufacturing a silicon wafer.
- The number of chips that can be produced on that wafer.
Semiconductor manufacturers have historically used process technology shrinks to increase the number of chips produced on each wafer and drive down chip costs, migrating from a 35nm process to 25nm, then 20nm, and so on.
As a general rule, the cost to process a silicon wafer is relatively constant, so the cost of a chip tends to decline in proportion to the process technology used to manufacture it (Fig. 1). As the process technology shrinks (across the bottom axis of the chart), the cost of the chip should decrease in proportion (the vertical axis).
1. The relative cost of a chip is proportional to its process geometry. (Source: Objective Analysis)
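The cost argument above can be sketched as a toy model. All of the numbers below (wafer cost, baseline die size, the assumption that die area scales with the square of the feature size) are illustrative assumptions, not figures from the article, and the model ignores yield and edge loss:

```python
import math

WAFER_COST = 5000.0      # assumed constant cost to process one wafer, in dollars
WAFER_DIAMETER_MM = 300  # standard 300-mm wafer

def cost_per_chip(feature_nm, die_mm2_at_35nm=100.0):
    """Illustrative model: if die area shrinks with the square of the
    feature size, chips per wafer grow and cost per chip falls."""
    die_area = die_mm2_at_35nm * (feature_nm / 35.0) ** 2
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    chips_per_wafer = wafer_area // die_area  # ignores edge loss and yield
    return WAFER_COST / chips_per_wafer

for node in (35, 25, 20):
    print(f"{node} nm: ${cost_per_chip(node):.2f} per chip")
```

Running the sketch shows the cost per chip falling with each shrink, which is the whole economic motivation for migrating to smaller geometries.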
Memory manufacturers believe that there is a limit to how small a flash or DRAM process can be shrunk. This is called the “scaling limit,” and it is determined by the number of electrons that can be stored on a flash gate or DRAM capacitor, also called a “memory cell.” As the process technology shrinks, the memory cell gets smaller and the number of electrons the cell can store declines toward the lower limit of what can be accurately measured. Eventually, the number of electrons on the memory cell will shrink to the point that it becomes extraordinarily difficult to determine whether there are actually any electrons on the cell at all.
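The electron-count argument can be made concrete with basic charge arithmetic: the charge on a cell is Q = C·V, and dividing by the electron charge gives the number of electrons stored. The capacitance values below are illustrative assumptions chosen only to show the trend, not measurements of any real process node:

```python
ELECTRON_CHARGE = 1.602e-19  # charge of one electron, in coulombs

def stored_electrons(capacitance_farads, voltage):
    """Charge on the cell is Q = C * V; dividing by the electron
    charge gives the number of electrons that charge represents."""
    return (capacitance_farads * voltage) / ELECTRON_CHARGE

# Assumed, illustrative cell capacitances (in femtofarads): as the
# process shrinks, capacitance drops, so the electron count falls.
for label, cap_ff in [("older node", 25.0), ("mid node", 10.0), ("smaller node", 5.0)]:
    n = stored_electrons(cap_ff * 1e-15, 1.0)  # assume a 1-V stored level
    print(f"{label}: ~{n:,.0f} electrons")
```

As the electron count falls into the hundreds and below, distinguishing a stored charge from noise becomes the measurement problem the article describes, which is why manufacturers look to emerging memories that do not store data as electron charge.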