A3Cube's new RONNIEE Express (Fig. 1) system is designed to put a tremendous amount of memory in the hands of programmers. It uses PCI Express to link a cluster's memory into a single environment, essentially a 64-bit shared address space, and it is designed to scale to very large clusters while incurring minimal overhead.
From a CPU's perspective, the A3Cube interface looks like a conventional PCI Express memory interface. In fact, the initial RONNIEE RIO (Fig. 2) and RONNIEE 2S are PCI Express adapter cards. They plug into any PCI Express host, from Macs and PCs to rack-mount servers.
The difference between a RONNIEE adapter and a conventional PCI Express adapter is that a normal PCI Express system logically appears to the CPU as a tree, with the CPU as the host at the root. As Figure 1 shows, this is not the case with A3Cube's approach, which employs non-transparent bridging on a massive scale. The system also supports remote I/O.
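The effect of non-transparent bridging is easiest to see in code. The following is a conceptual sketch, not A3Cube's implementation: a non-transparent bridge (NTB) exposes a window into the peer's address space, and traffic landing in that window is re-issued on the far side at a translated address. All names and numbers here are illustrative.

```c
/* Conceptual sketch of NTB address translation (illustrative only). */
#include <stdint.h>
#include <stdio.h>

/* One translation window: local aperture -> remote base. */
struct ntb_window {
    uint64_t local_base;   /* where the aperture appears locally  */
    uint64_t size;         /* aperture size in bytes              */
    uint64_t remote_base;  /* corresponding base on the far side  */
};

/* Translate a local address into the peer's address space.
 * Returns 0 and sets *out on a hit, -1 if outside the window. */
static int ntb_translate(const struct ntb_window *w,
                         uint64_t local_addr, uint64_t *out)
{
    if (local_addr < w->local_base ||
        local_addr >= w->local_base + w->size)
        return -1;
    *out = w->remote_base + (local_addr - w->local_base);
    return 0;
}

int main(void)
{
    struct ntb_window w = { 0x80000000ULL, 0x10000000ULL, 0x200000000ULL };
    uint64_t remote;

    if (ntb_translate(&w, 0x80001000ULL, &remote) == 0)
        printf("local 0x80001000 -> remote 0x%llx\n",
               (unsigned long long)remote);
    return 0;
}
```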
Related Articles
- Server CPU Targets In-Memory Analytics
- Packaged Linux Delivers Network Functions Virtualization
- Essentials Of The Hadoop Open Source Project
The system essentially exposes a node's memory to the rest of the cluster. Remote memory is accessed with ordinary PCI Express transactions; the only difference is that the fabric routes each transaction to the target node, much as other high-performance computing (HPC) fabrics do.
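For application code, this means remote memory can be touched with ordinary loads and stores once the window is mapped. Here is a minimal sketch assuming the driver exposes the remote window as a mappable device file; the path /dev/ronniee0 and the mapping offset are hypothetical, not A3Cube's documented API.

```c
/* Minimal sketch: remote memory as ordinary PCIe transactions.
 * The device path is hypothetical, not A3Cube's documented API. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define WINDOW_SIZE (1UL << 20)   /* map 1 MB of the remote window */

int main(void)
{
    int fd = open("/dev/ronniee0", O_RDWR);   /* hypothetical device */
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    /* Map the remote node's exported memory into our address space. */
    volatile uint32_t *remote = mmap(NULL, WINDOW_SIZE,
                                     PROT_READ | PROT_WRITE,
                                     MAP_SHARED, fd, 0);
    if (remote == MAP_FAILED) { perror("mmap"); close(fd); return EXIT_FAILURE; }

    /* An ordinary store; the hardware turns it into a PCIe write
       that the fabric routes to the remote host's memory. */
    remote[0] = 0xdeadbeef;

    /* An ordinary load becomes a PCIe read across the fabric. */
    printf("remote word: 0x%x\n", remote[0]);

    munmap((void *)remote, WINDOW_SIZE);
    close(fd);
    return 0;
}
```

For routing these transactions, there are two possible interconnect scenarios.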
The first approach uses a more conventional adapter/switch configuration. A RONNIEE 2S adapter sits in each host, and each adapter connects to a RONNIEE 3 switch. Switches can be stacked for larger networks. The RONNIEE 2S has four ports and can be used in small clusters without a switch.
The second approach uses the RONNIEE RIO, which provides six individual links to a host's nearest neighbors in a 3D torus. Like the switch, these adapters know how to forward transactions through the fabric. This allows scaling without switches, at the cost of added latency for each hop.
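To get a feel for that latency, the hop count between two nodes in a 3D torus is simply the sum of the per-dimension ring distances, with wraparound. The dimensions below are illustrative; actual RONNIEE topologies may differ.

```c
/* Sketch: minimal hop count in a 3D torus (illustrative dimensions). */
#include <stdio.h>

/* Shortest distance along one ring of length n, allowing wraparound. */
static int ring_dist(int a, int b, int n)
{
    int d = a > b ? a - b : b - a;
    return d < n - d ? d : n - d;
}

/* Total hops = sum of per-dimension ring distances. */
static int torus_hops(const int src[3], const int dst[3], const int dim[3])
{
    int hops = 0;
    for (int i = 0; i < 3; i++)
        hops += ring_dist(src[i], dst[i], dim[i]);
    return hops;
}

int main(void)
{
    int dim[3] = { 8, 8, 8 };   /* 512-node torus (example)          */
    int src[3] = { 0, 0, 0 };
    int dst[3] = { 7, 4, 1 };   /* wraparound makes x only 1 hop     */

    printf("hops: %d\n", torus_hops(src, dst, dim));   /* 1 + 4 + 1 = 6 */
    return 0;
}
```

In an 8x8x8 torus the worst case is 12 hops, so the diameter grows only with the cube root of the node count.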
Both approaches are designed for redundant connections, since many applications require robust high-availability (HA) support. HA fencing is provided by the hardware, and each link can be monitored. The system can also be partitioned, so a large cluster can be broken up into smaller clusters using hardware barriers. A host can likewise be linked to a number of different clusters, providing a control communication mechanism.
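The article does not detail the fencing mechanism, so the following is only a hedged sketch of the kind of decision logic involved: poll each link's health and fence a peer whose link goes silent. Every function here is hypothetical stand-in logic, not A3Cube's interface.

```c
/* Hedged sketch of link-monitoring and fencing; all names are
 * hypothetical stand-ins, not A3Cube's interface. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_LINKS   6   /* e.g., the six RIO neighbor links       */
#define MISS_LIMIT  3   /* fence after this many silent polls     */

/* Hypothetical probe: would query a link-status register. */
static bool link_alive(int link) { return link != 2; }  /* simulate a dead link */

/* Hypothetical action: would program the hardware barrier/fence. */
static void fence_link(int link) { printf("fencing link %d\n", link); }

int main(void)
{
    int misses[NUM_LINKS] = { 0 };

    for (int poll = 0; poll < 10; poll++) {      /* monitoring loop */
        for (int l = 0; l < NUM_LINKS; l++) {
            if (link_alive(l))
                misses[l] = 0;                   /* healthy: reset  */
            else if (++misses[l] == MISS_LIMIT)
                fence_link(l);                   /* isolate peer    */
        }
    }
    return 0;
}
```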
A3Cube provides low-level access, but many developers will simply use the TCP sockets support. Drivers are available for most operating systems, including Windows and Linux.
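Because the fabric is presented through the standard sockets API, ordinary sockets code runs over it unchanged. The address and port below are placeholders for whatever the fabric's network interface is configured with.

```c
/* Standard sockets client; nothing fabric-specific is needed.
 * The address and port are placeholders. */
#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return EXIT_FAILURE; }

    struct sockaddr_in peer = { 0 };
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(5000);                   /* placeholder port    */
    inet_pton(AF_INET, "10.0.0.2", &peer.sin_addr);  /* placeholder address */

    /* The driver carries this connection over the PCIe interconnect
       like any other network path. */
    if (connect(fd, (struct sockaddr *)&peer, sizeof peer) < 0) {
        perror("connect");
        close(fd);
        return EXIT_FAILURE;
    }

    const char msg[] = "hello over the fabric\n";
    write(fd, msg, sizeof msg - 1);
    close(fd);
    return 0;
}
```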
A3Cube's approach pushes the limits of PCI Express, but it greatly expands the amount of memory and the number of peripherals a host can directly access. It can support a wide range of scenarios, from aggregating banks of SSDs to accelerating Hadoop (see “Essentials Of The Hadoop Open Source Project”).