A Summary Of The DDR Memory Controller Standard—Wait, There Isn’t One!
The number of systems-on-a-chip (SoCs) that require an interface to off-chip memory is increasing. As a result, more and more designers are turning to double-data-rate (DDR) SDRAM interfaces such as DDR, DDR2, and DDR3 to meet their requirements for low cost, security of supply, storage capacity, and performance. Fortunately for those designers, DRAMs have been standardized since the 1970s.
But this still leaves a challenge that most SoC engineers don’t recognize until things start to go wrong. DRAMs are standardized for computer systems such as PCs using dual-inline memory modules (DIMMs) and aren’t designed with embedded applications in mind. In fact, there is no standard for a DRAM controller or a DRAM channel. Only the DRAM itself is standardized.
Although the Internet provides unprecedented amounts of public information on DRAMs, little guidance is available to help engineers decide how to build a memory controller into their SoC. Many embedded systems, such as those found in digital TVs, personal video recorders, printers, and other consumer electronics, typically benefit from a different system configuration than one would derive from reading DRAM data sheets or most of the public information about DRAMs.
Designing an embedded DRAM memory controller involves several critical design decisions that are best addressed early in the process. Start with performance, measured in megabits per second (Mbits/s) per pin: the theoretical maximum is unachievable in practice. This makes the memory controller's algorithms for prioritizing traffic, re-ordering commands, and hiding precharge/bank-activate commands in otherwise unused command slots critical to maximizing memory channel utilization. A properly optimized system may then be able to meet its bandwidth targets with a narrower or slower channel, which lowers the chip cost, as the rough calculation below illustrates.
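To make that concrete, here is a back-of-envelope comparison showing how controller efficiency feeds directly into channel sizing. Every number is an illustrative assumption (DDR2-800 signaling, 55% utilization for a simple in-order controller, 80% for one that re-orders commands), not a figure from any particular design:

    #include <stdio.h>

    /* Channel-sizing sketch: peak bandwidth derated by controller
       utilization. Every number is an illustrative assumption. */
    int main(void) {
        double rate_mbps  = 800.0;  /* DDR2-800: 800 Mbits/s per pin */
        double width_bits = 32.0;   /* candidate channel width */
        double peak = rate_mbps * width_bits / 8.0;   /* Mbytes/s */

        double naive_util     = 0.55;  /* assumed: simple in-order controller */
        double optimized_util = 0.80;  /* assumed: re-ordering controller */

        printf("32-bit peak:      %6.0f Mbytes/s\n", peak);
        printf("32-bit naive:     %6.0f Mbytes/s\n", peak * naive_util);
        printf("32-bit optimized: %6.0f Mbytes/s\n", peak * optimized_util);

        /* With the better controller, a 16-bit channel may already cover
           workloads the naive controller needed 32 bits to satisfy. */
        printf("16-bit optimized: %6.0f Mbytes/s\n",
               rate_mbps * 16.0 / 8.0 * optimized_util);
        return 0;
    }

If the application's sustained demand falls between the 16-bit optimized and the 32-bit naive figures, the smarter controller pays for itself in pins.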
A narrower channel requires fewer pins, which obviously benefits the SoC cost. But the memory channel performance target has further-reaching effects, as it can impact critical package decisions like using flip-chip versus wire-bond technology. In addition, the ratio of power and ground pins to signal pins on the DRAM interface must be optimized to properly account for signal-integrity effects, such as simultaneous switching outputs (SSO), which play havoc with DRAM channel signaling if they aren't thoroughly analyzed in advance.
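To put numbers on the pin cost, the following rough tally covers a single x16 DDR2 channel. The signal counts are approximate (exact address width varies with device density), and the one power/ground pair per four switching I/Os is purely an assumed design point for illustrating the SSO budget:

    #include <stdio.h>

    /* Rough signal-pin tally for one x16 DDR2 channel. Counts are
       approximations; exact address width varies with device density. */
    int main(void) {
        int addr = 14, bank = 3, cmd = 3;   /* A[13:0]; BA[2:0]; RAS#, CAS#, WE# */
        int ctrl = 3;                       /* CS#, CKE, ODT */
        int clk  = 2;                       /* CK/CK# differential pair */
        int dq = 16, dqs = 4, dm = 2;       /* data, 2 strobe pairs, masks */
        int signals = addr + bank + cmd + ctrl + clk + dq + dqs + dm;

        /* Power/ground pairs allocated against SSO noise; one pair per
           four switching I/Os is purely an assumed design point. */
        int io = dq + dqs + dm;
        int pwr_gnd = 2 * ((io + 3) / 4);   /* ceil(io/4) pairs */

        printf("signal pins: %d, power/ground: ~%d, total: ~%d\n",
               signals, pwr_gnd, signals + pwr_gnd);
        return 0;
    }

Doubling the data width would add roughly 30 more pins (data, strobes, masks, and their associated power/ground pairs), which is the sort of delta that feeds the flip-chip-versus-wire-bond decision above.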
Power issues associated with embedded DRAM controllers are even more obscure. The bulk of an embedded DRAM controller's power is consumed by the output drivers, as they switch the load capacitance and drive the terminated transmission lines, and by the on-die termination (ODT) resistors that terminate the bus during read operations. The well-standardized output driver and ODT impedances of the DRAMs bear very little relation to the values that should be employed by the physical interface (PHY) of the SoC DRAM controller.
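A crude static model hints at the magnitudes involved. This sketch assumes SSTL_18 levels with an 18-Ω driver and a 75-Ω effective termination, both chosen purely for illustration; it ignores switching activity and every dynamic effect, so treat it as a floor, not a budget:

    #include <stdio.h>

    /* Crude static-power model for one terminated SSTL_18 data line.
       While the driver holds a low, current flows from the Vtt
       termination supply through the termination resistor and the
       driver to ground. Switching activity and capacitive charging
       are ignored; all values are illustrative assumptions. */
    int main(void) {
        double vtt = 0.9;   /* SSTL_18 termination voltage, VDDQ/2 */
        double rs  = 18.0;  /* assumed controller driver impedance, ohms */
        double rtt = 75.0;  /* assumed effective termination, ohms */

        double i = vtt / (rs + rtt);        /* static line current, A */
        double p = vtt * vtt / (rs + rtt);  /* power burned in the chain, W */

        int pins = 22;  /* 16 DQ + 4 DQS + 2 DM on an x16 device */
        printf("per pin: %.1f mA, %.1f mW; %d pins: %.0f mW\n",
               i * 1e3, p * 1e3, pins, p * pins * 1e3);
        return 0;
    }

Because the per-pin figure depends on both impedances, simply copying the DRAM's standardized driver and ODT values into the controller PHY rarely minimizes power.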
For example, in a system using one dual-rank DDR2 DIMM, the memory controller can see up to 18 loads on the address and command pins, compared to two on each data pin. Such systems must account for the on-DIMM series termination resistors on all of the data lines and data strobes. Systems that require only a few DRAM components on the motherboard don't need series termination and can use a larger, lower-power output driver impedance.
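The 18-versus-2 asymmetry falls straight out of the module topology. The sketch below assumes an unbuffered 72-bit ECC DIMM built from x8 devices (nine per rank, two ranks), one configuration that yields the 18-load figure:

    #include <stdio.h>

    /* Fan-out sketch for an unbuffered dual-rank DIMM built from x8
       devices, assuming a 72-bit ECC data path (9 devices per rank). */
    int main(void) {
        int ranks = 2;
        int device_width = 8, dimm_width = 72;
        int devices_per_rank = dimm_width / device_width;   /* 9 */

        int addr_loads = ranks * devices_per_rank;  /* every device listens */
        int data_loads = ranks;                     /* one device per rank */

        printf("address/command loads per net: %d\n", addr_loads);  /* 18 */
        printf("data loads per net:            %d\n", data_loads);  /*  2 */
        return 0;
    }

Address and command nets touch every device on the module, while each data net touches only one device per rank, so the two net classes call for very different drive strengths and termination.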
Embedded DRAM interfaces also require in-system calibration, or data training, at system power-up to properly align the DRAM signaling. Typically, the higher the speed, the more complex and involved this training becomes. In fact, before DDR3, the DRAM offered no mode to facilitate this training. A simplified training loop is sketched below.
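As an illustration of what such training involves, here is a minimal read-training sweep: step a DQS delay line across its range, test a known pattern at each tap, and park the delay in the middle of the passing window. The hardware hooks set_dqs_delay() and read_passes() are hypothetical stand-ins for whatever register interface a real PHY exposes, stubbed here so the sketch runs:

    #include <stdio.h>

    /* Minimal read-training sweep. set_dqs_delay() and read_passes()
       are hypothetical stand-ins for a real PHY's register interface;
       read_passes() is stubbed with a fake data eye so the sketch runs. */

    #define TAPS 64

    static void set_dqs_delay(int tap) { (void)tap; }   /* stub */
    static int read_passes(int tap) {                   /* stub */
        return tap >= 20 && tap <= 44;  /* pretend passing window */
    }

    int main(void) {
        int first = -1, last = -1;

        /* Step the delay line across its range, testing a known
           pattern at each tap, and record the passing window. */
        for (int tap = 0; tap < TAPS; tap++) {
            set_dqs_delay(tap);
            if (read_passes(tap)) {
                if (first < 0) first = tap;
                last = tap;
            }
        }

        if (first < 0) {
            printf("training failed: no passing window\n");
            return 1;
        }

        /* Park the delay in the middle of the eye for maximum margin. */
        int center = (first + last) / 2;
        set_dqs_delay(center);
        printf("window [%d..%d], centered at tap %d\n", first, last, center);
        return 0;
    }

A real controller repeats this sweep per byte lane and per rank; DDR3 finally added dedicated modes, such as write leveling, specifically to assist this kind of training.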
The final issue often overlooked by SoC designers is the bill-of-materials cost of the DRAMs themselves to their customers. The PC market drives DRAM pricing, favoring large-capacity, 8-bit-wide DRAMs. But because embedded systems typically require smaller DRAMs with a 16-bit-wide interface, DDR2 currently offers, and will continue to offer, a less expensive solution for quite some time, even after the much-hyped "price per bit crossover" against DDR3. To best accommodate all possible future pricing scenarios, designers should seek a DDR2 controller solution that can easily migrate to DDR3 once the price crossover takes place for the actual devices used in the embedded application.