SFI Will Challenge, Reward Computer Designers

Nov. 10, 2003
While switch-fabric interconnect (SFI) technology is common in communications applications, its recent migration to the computing environment is gaining a lot of attention. The design considerations in this more challenging arena are dramatically different, involving latency, topology, aggregate throughput, and other performance-related issues.

The change from a multipoint, parallel backplane to an emerging SFI solution creates major challenges for computer designers. In communications, adopting SFI was largely a matter of extending the external network topology onto the backplane. Computer designers, by contrast, must weigh the application of SFI technology in the backplane itself very carefully, because many of its usual attributes don't adapt easily to that role.

SFI typically uses the packet concept, which works well in communications applications: sending information between systems, not just across a backplane. Packets proved very effective for sending data over a single line, but they're of less value in the backplane realm, where the cost efficiency of one line versus two isn't significant.

For a single-CPU computer, where everything within the system is coordinated and directed through the processor, the advantages of multiple channels are minimal. Once a single-processor computer embeds a separate microcontroller into each function card to control that card's own operations, the multiple independent channels of an SFI become extremely attractive. Right now, though, the single-channel bandwidth of a cost-effective switch-fabric solution doesn't compete with a multipoint solution.

In the multiprocessor server environment, the transmission and setup latencies encountered in today's SFI implementations are quite significant. The transmission latency in a multipoint environment, for example, includes the propagation delay in the two endpoint transceivers and along the trace between them. In the case of an SFI, several other factors come into play: serialization at the starting point, the trace to the switch, de-serialization at the switch, delay through the switch, re-serialization at the switch, the trace from the switch, and de-serialization at the final endpoint. Also, at one or two points, elastic buffer delays are needed to adjust between clock boundaries. All of this adds up to a lot of potential transmission delay.
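The latency components above can be compared with back-of-the-envelope arithmetic. Every number in this sketch is an assumed placeholder, not a measured figure; the point is only to show how the SFI path accumulates delay stages that a multipoint backplane never incurs.

```python
# Illustrative latency budget: multipoint backplane vs. SFI path.
# All values are assumed placeholders in nanoseconds, not measurements.

multipoint_ns = {
    "tx_transceiver": 2.0,   # propagation delay in the sending transceiver
    "trace": 3.0,            # endpoint-to-endpoint trace
    "rx_transceiver": 2.0,   # propagation delay in the receiving transceiver
}

sfi_ns = {
    "serialize_at_source": 8.0,    # serialization at the starting point
    "trace_to_switch": 1.5,
    "deserialize_at_switch": 8.0,
    "switch_delay": 5.0,
    "reserialize_at_switch": 8.0,
    "trace_from_switch": 1.5,
    "deserialize_at_endpoint": 8.0,
    "elastic_buffers": 4.0,        # clock-boundary adjustment (one or two points)
}

def total(budget):
    """Sum one path's latency components."""
    return sum(budget.values())

print(f"multipoint: {total(multipoint_ns):.1f} ns")  # 7.0 ns
print(f"SFI path:   {total(sfi_ns):.1f} ns")         # 44.0 ns
```

Even with generous assumptions, the serialize/deserialize stages and elastic buffers dominate the SFI budget, which is why they matter so much more in a latency-sensitive computing backplane than in a long-haul communications link.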

Data pipelining, another issue for the computing environment, may become problematic because of setup latency. Typically, only one packet at a time may be transferred unless the crosspoint switch contains significant memory. While a packet is in transit, nothing can be done to set up the next transfer, creating potential gaps in the data stream. Setup latency may be compounded when a packet is sent to an output port that's already busy with another request. In that case, unless there is a store-and-forward buffer, the packet is rejected and recovery techniques must be implemented to resend it. A mesh backplane solves many of these latency problems, though its expense may become an issue.
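The busy-output-port problem can be modeled as a toy arbitration loop. The class and retry behavior below are hypothetical illustrations of the reject-and-resend pattern described above, not any real switch's interface; a store-and-forward buffer would instead queue the packet inside the switch rather than rejecting it.

```python
# Toy model of a crosspoint switch without store-and-forward buffering:
# a packet aimed at a busy output port is rejected and must be resent.

class CrosspointSwitch:
    def __init__(self, num_ports):
        self.busy = [False] * num_ports  # per-output-port busy flags

    def send(self, packet, out_port):
        """Return True if accepted; False (rejected) if the port is busy."""
        if self.busy[out_port]:
            return False                 # no buffer: sender must retry later
        self.busy[out_port] = True       # port occupied while packet is in transit
        return True

    def complete(self, out_port):
        """Mark the transfer done, freeing the output port."""
        self.busy[out_port] = False

switch = CrosspointSwitch(num_ports=4)
assert switch.send("pkt-A", out_port=2) is True
assert switch.send("pkt-B", out_port=2) is False  # rejected: port 2 busy
switch.complete(out_port=2)                       # pkt-A finishes
assert switch.send("pkt-B", out_port=2) is True   # retry now succeeds
```

The retry loop is exactly the "recovery technique" cost the article describes: every rejection burns a full setup-latency round trip before the data can move.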

Communications applications tolerate latency far better: the transmission of data packets operates quite satisfactorily in spite of it. In a computer, by contrast, latency and nonpipelined data force the processor to waste performance on wait states.

In the face of all these issues, how can SFI technology be effectively applied to the computing environment? Here is a possibility worth considering: Break out the data, using a separate set of lines to send the address and control commands.

One primary advantage of this approach is the reduction of both transmission and setup latency to almost nothing. Also, multiple requests could be sent to the switch for pre-arbitration while the previous data stream is still being sent. Multiple outstanding requests would eliminate a great deal of the setup latency and increase the probability of addressing an open output port. The design could also use the address/control lines for messages, doorbelling, and short broadcasts, freeing the datapath for data. Finally, specific ports could be prioritized (e.g., microprocessors over cache) to preempt requests from lower-priority ports.
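A separate address/control path lets requests queue at the switch while the previous transfer is still occupying the data lines. The scheme below is a hypothetical sketch of that pre-arbitration idea: requests are ordered by (priority, arrival order), and a request is granted only if its output port is open, so a high-priority port (say, a microprocessor) preempts a lower-priority one (say, cache) contending for the same destination.

```python
import heapq

# Hypothetical pre-arbitration over a separate address/control path:
# requests accumulate while data is in flight, and the arbiter grants
# the highest-priority request whose output port is currently open.

class PreArbiter:
    def __init__(self, num_ports):
        self.port_open = [True] * num_ports
        self.requests = []   # min-heap of (priority, seq, src, dst)
        self.seq = 0         # arrival order breaks priority ties

    def request(self, src, dst, priority):
        """Queue a transfer request; lower number = higher priority."""
        heapq.heappush(self.requests, (priority, self.seq, src, dst))
        self.seq += 1

    def grant(self):
        """Grant the best request targeting an open port, or None."""
        deferred, winner = [], None
        while self.requests:
            prio, seq, src, dst = heapq.heappop(self.requests)
            if self.port_open[dst]:
                self.port_open[dst] = False   # port busy for this transfer
                winner = (src, dst)
                break
            deferred.append((prio, seq, src, dst))  # port busy: requeue
        for item in deferred:
            heapq.heappush(self.requests, item)
        return winner

arb = PreArbiter(num_ports=4)
arb.request(src=1, dst=3, priority=1)  # e.g., a cache request
arb.request(src=0, dst=3, priority=0)  # e.g., a CPU request, same port
assert arb.grant() == (0, 3)           # CPU preempts the cache request
assert arb.grant() is None             # port 3 now busy; cache waits
```

Because arbitration happens on the side channel, the winning transfer can start the moment the data lines free up, closing the setup-latency gap the article identifies.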

Other functions not exclusively requiring this separation could also be implemented to improve the overall operating environment for computing system applications.
