This article is part of the TechXchange: Silicon Photonics.
What you'll learn:
- How silicon-photonics technology can help address data-center networking demands.
- Other potential applications for high-bandwidth optical interconnect networks.
- The role of silicon photonics in AI and ML.
Silicon photonics has become one of the key building blocks in the data center in recent years. It uses the power of photons to connect switches, servers, and other gear quickly and efficiently over long distances.
But silicon photonics is bound to become more important as the demand for bandwidth continues to rise. Or that’s what the experts at imec, one of the world’s leading semiconductor research labs, will tell you.
For imec, it’s only a matter of time before silicon photonics starts taking responsibility for chip-to-chip connectivity between CPUs, GPUs, and other XPUs—where the “X” stands for any computer architecture that best suits the needs of a specific workload—at the heart of data centers and AI supercomputers.
Integrating optics and electronics so closely together is a huge challenge. But the semiconductor industry is rising to meet it. The rise of a new generation of switch chips with co-packaged optics is proof of that.
imec said it sees a path to improving the power efficiency (picojoules per bit), bandwidth density (Tb/s per mm), and cost (Gb/s per dollar) of silicon photonics by an order of magnitude over time. That would open the door to a more advanced form of co-packaged optics called “wafer-level optical interconnects,” which can transfer data between chips on a printed circuit board (PCB) or inside a package at the speed of light.
To learn more about the innovation happening behind the scenes, we sat down with Joris Van Campenhout, the director of imec’s Optical I/O program. This interview has been edited for clarity.
What innovations will be needed in coming years to keep up with the networks in data centers?
Over the last decade, the shift from 100G to 200G to 400G pluggable optics has enabled cloud data centers to support the ever-growing demand for everything from video streaming and web browsing to social networking and cloud computing, among other uses. Continuing this evolution, 800G pluggable modules will soon enter the field, and 1.6T optics are expected by 2026, connecting data-center switches through single-mode fiber cabling over distances spanning hundreds of meters.
The need for speed in data centers is, on the one hand, being driven by the CMOS scaling of data center switches, and on the other hand, by evolving IEEE Ethernet transceiver (TxRx) standards.
Historically, the lowest-cost transceiver solutions have been obtained by minimizing the number of parallel optical channels or lanes. The key parameter, therefore, is the baud rate (or signaling rate) of the optical transceiver, which refers to the number of symbols that can be transmitted per second through a single optical channel.
The first generation of 1.6-Tb/s pluggable optics will be implemented with eight parallel lanes, each carrying 200 Gb/s, with baud rates scaling up to 100 Gbaud (Gbd) with PAM-4. For the subsequent generation running at 400 Gb/s per lane, baud rates will likely scale to 140 Gbd and modulation formats will grow even more complex.
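The lane arithmetic above can be sketched in a few lines. This is a back-of-the-envelope check, not production code; the function name and structure are illustrative, with the numbers taken from the interview:

```python
# PAM-4 encodes 2 bits per symbol, so the per-lane data rate is the
# signaling rate (baud rate) times the bits carried by each symbol.

def lane_rate_gbps(baud_gbd: float, bits_per_symbol: int) -> float:
    """Per-lane data rate in Gb/s from signaling rate and modulation order."""
    return baud_gbd * bits_per_symbol

# First-generation 1.6-Tb/s pluggable optics: 8 lanes at 100 Gbaud PAM-4.
lanes = 8
per_lane = lane_rate_gbps(100, 2)   # 200 Gb/s per lane
aggregate = lanes * per_lane        # 1600 Gb/s = 1.6 Tb/s
print(per_lane, aggregate)
```

The same relation shows why 400-Gb/s lanes push toward ~140 Gbd: at 2 bits per symbol that's only 280 Gb/s, so the modulation format must also become more complex.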
This evolution drives the development of high-bandwidth electro-optical modulators and photodetectors. Insufficient bandwidth in the electro-optical channels degrades the quality of the data transmission. This must be mitigated with digital signal processors (DSPs), implemented on the most advanced CMOS nodes. But these DSPs add considerable power, latency, and cost. And so there is clear demand for optical components with bandwidths of more than 100 GHz that will substantially reduce the need for DSPs.
How is imec contributing to overcoming these challenges for data centers?
For years, imec has advanced the state of the art in silicon photonics and, more recently, we have developed many silicon- and germanium-based electro-optical components at bandwidths of up to 50 GHz, enabling 200-Gb/s PAM-4 transmission in the first generation of 1.6-Tb/s pluggable optics.
We are also overcoming a host of issues that prevented the integration of lasers, waveguides, and other optical building blocks. For instance, we realized the integration of high-quality low-pressure chemical-vapor-deposition (LPCVD)-based SiN waveguides in imec's SiPho platform. This was a challenge before due to the high temperatures required to create them, which can negatively impact the performance of Ge- and Si-based active electro-optic devices. These waveguides complement our toolbox with high-precision passive optical components.
Moreover, we developed SiPho interfaces for flip-chip-based laser integration, which will help reduce the overall transceiver cost, since lasers used in silicon photonics are not natively available in a CMOS production process. Using this flip-chip process, we recently demoed indium-phosphide (InP) distributed-feedback (DFB) laser diodes bonded onto silicon-photonics wafers with an alignment precision within 300 nm and coupling loss of less than 2 dB.
For lane rates of 400 Gb/s, the SiPho toolbox needs a major upgrade. Especially on the transmission side, modulators with bandwidths of more than 100 GHz and sufficiently low optical losses are very hard to achieve in Si or Ge, which necessitates the introduction of “exotic” electro-optic materials.
We are carrying out early integration pathfinding efforts aimed at the hybrid or heterogeneous integration of such non-CMOS materials in the SiPho process, exploring techniques such as micro-transfer printing.
In what other domains will we see high-bandwidth optical interconnect networks emerge?
The recent arrival of large generative AI models such as ChatGPT and LaMDA has put a spotlight on the importance of networking bandwidth in the high-performance AI/ML clusters used to train such models. Pluggable optical interconnects have already started to replace copper (Cu) interconnects within such computer clusters, easing the bottlenecks between multi-XPU servers across distances of a few to tens of meters.
But as these clusters scale to thousands of XPUs, optical interconnects will gradually move into the board and package to meet high chip-to-chip interconnect bandwidth requirements. This evolution puts even more pressure on metrics like bandwidth density, cost, power, and reliability. Optical transceivers with more than a Tb/s per mm of bandwidth density, power consumption of less than 5 pJ/bit, link latency of less than 100 ns, and high reliability will be needed—all at a cost of 10 cents per Gb/s or less.
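The targets quoted above can be made concrete with a quick calculation. This is an illustrative sketch, assuming a hypothetical 1-Tb/s chip-to-chip transceiver; the numbers come from the interview, not a datasheet:

```python
# Power and cost implications of the stated targets:
# <5 pJ/bit energy and <=10 cents per Gb/s.

def link_power_w(bandwidth_gbps: float, energy_pj_per_bit: float) -> float:
    """Total link power in watts: (Gb/s -> bits/s) times (pJ/bit -> J/bit)."""
    return bandwidth_gbps * 1e9 * energy_pj_per_bit * 1e-12

def link_cost_usd(bandwidth_gbps: float, usd_per_gbps: float) -> float:
    """Transceiver cost at a given price per Gb/s."""
    return bandwidth_gbps * usd_per_gbps

# A hypothetical 1-Tb/s (1,000-Gb/s) transceiver at the 5-pJ/bit target:
print(link_power_w(1000, 5))      # ~5 W
print(link_cost_usd(1000, 0.10))  # ~$100 at 10 cents per Gb/s
```

At thousands of XPUs per cluster, each with multiple such links, even these per-link budgets add up quickly, which is why the efficiency targets are so aggressive.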
Unlike standard data-center networks, the tight power and latency budgets of AI/ML clusters leave no or very little room for digital signal processing. This means that the optical channels need to be very “clean,” with extremely low bit error rates (BERs).
In the context of AI and ML, how are you trying to address these problems with silicon photonics?
For these systems, a more appropriate way to scale bandwidth is to use a larger number of parallel optical channels, each running error-free at data rates in the range of 16 to 64 Gb/s, using optical and electrical components optimized for efficiency rather than bandwidth. In turn, aggressive wavelength-division multiplexing (WDM) with 8, 16, or more wavelengths helps keep the total number of physical optical channels in check.
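The WDM trade-off described above can be sketched as follows. All numbers are illustrative, taken from the ranges in the interview, and the function is hypothetical:

```python
import math

# With per-channel rates held at modest, error-free speeds, aggregate
# bandwidth scales through wavelengths per fiber and fiber count.

def fibers_needed(target_gbps: float, wavelengths: int, rate_gbps: float) -> int:
    """Physical fibers needed to reach a target aggregate bandwidth."""
    per_fiber = wavelengths * rate_gbps
    return math.ceil(target_gbps / per_fiber)

# Reaching 8 Tb/s with 16 wavelengths at 64 Gb/s each
# (1.024 Tb/s per fiber) takes 8 fibers...
print(fibers_needed(8000, 16, 64))
# ...versus 125 fibers without WDM at the same per-channel rate.
print(fibers_needed(8000, 1, 64))
```

This is why the interview pairs modest per-lane rates with aggressive WDM: the wavelength count, not the baud rate, carries the scaling burden.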
To connect optical I/O modules with the XPU or high-bandwidth memory (HBM), wafer-level co-packaged optics are emerging (see figure). For this technology, we want to leverage, as much as possible, emerging advanced wafer-level 2.5D and 3D packaging technologies such as "micro" bumps, through-silicon vias (TSVs), silicon bridges, and eventually hybrid bonding.
Then, we leverage power-efficient wide-I/O electrical interfaces for the "last-mile" copper interconnects. Finally, cost and optical link budgets are increasingly driving the integration of the light sources onto the SiPho chips.
For both cloud data centers and AI/ML clusters, networking bandwidth has become an increasingly important metric in determining system-level performance. The networking bottleneck is creating a serious sense of urgency, translating into strong industry engagement in silicon-photonics adoption.
Can you discuss any recent research highlights in the AI/ML context?
Recent research is focused on optimizing silicon ring modulators, WDM filters, and circuits for enabling low-power hybrid CMOS-SiPho transceivers. Ring-based WDM SiPho transceiver architectures have the potential to realize the performance metrics required for AI/ML clusters.
imec recently debuted a ring-based CMOS-SiPho transceiver with optical energy consumption as low as 3.5 pJ/bit and a BER below 10⁻¹², effectively "error-free." The solution is scalable to 800 Gb/s per mm of bandwidth density, using eight wavelengths.
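To give a feel for what a 10⁻¹² bit error rate means in practice, here is a rough sketch for a single lane. The 64-Gb/s lane rate is an illustrative value from the range mentioned earlier, not a measured figure:

```python
# At a given lane rate and BER, the expected error rate per second is
# (bits per second) * (errors per bit); its inverse is the mean time
# between bit errors.

def mean_seconds_between_errors(rate_gbps: float, ber: float) -> float:
    """Expected time between bit errors for a given lane rate and BER."""
    errors_per_second = rate_gbps * 1e9 * ber
    return 1.0 / errors_per_second

# A hypothetical 64-Gb/s lane at BER = 1e-12:
print(mean_seconds_between_errors(64, 1e-12))  # ~15.6 s between errors
```

Compare that with a BER of, say, 10⁻⁶, where the same lane would see tens of thousands of errors every second and would need power-hungry DSP-based correction to be usable.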
The move toward SiPho transceivers with a growing number of optical lanes comes with a higher risk of yield loss. This should be avoided as much as possible from a cost perspective. To measure and subsequently improve yield, we’re working on wafer-scale solutions for known-good die (KGD) testing of the SiPho transceivers.
Where are you taking silicon photonics next? What do you expect to achieve in the long term?
Wafer-level co-packaged optics combined with aggressive WDM is considered a viable approach to achieve 2- to 4-Tb/s/mm fiber-coupled optical links for AI and ML systems, with power consumption down to 1 to 2 pJ/bit. But by the end of the decade, we anticipate the need for optical interconnects approaching 10 Tb/s per mm of bandwidth density at power consumption well below 1 pJ per bit.
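One thermal implication of those long-term targets is worth spelling out: bandwidth density (Tb/s per mm of die edge) times energy per bit gives the power dissipated per millimeter of optical "shoreline." The following sketch uses the figures quoted above; the function is illustrative:

```python
# Power per mm of die edge = (bits/s per mm) * (J per bit).

def power_per_mm_w(density_tbps_per_mm: float, energy_pj_per_bit: float) -> float:
    """Dissipated power per mm of optical shoreline, in watts."""
    bits_per_second = density_tbps_per_mm * 1e12
    return bits_per_second * energy_pj_per_bit * 1e-12

print(power_per_mm_w(10, 1.0))  # ~10 W per mm at 10 Tb/s/mm and 1 pJ/bit
print(power_per_mm_w(10, 0.5))  # ~5 W per mm if energy falls to 0.5 pJ/bit
```

This is why the bandwidth-density and energy-per-bit targets have to improve together: density gains alone would concentrate unmanageable heat at the die edge.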
We recently started exploring wafer-level optical interconnects—our long-term vision for ultra-high-bandwidth chip-to-chip optical I/O. This deeply integrated optical I/O technology will ultimately enable massively parallel, optical XPU-to-XPU connectivity, promising tremendous gains in system performance.
There are other challenges that we’re trying to solve along the way. For instance, a key challenge is to deploy those scaled electrical interconnects and related process flows while enabling the optical fiber coupling interfaces. Today, fibers are "glued" to SiPho chips after actively aligning them to ensure good coupling efficiencies. In future implementations, it’s highly desirable to have "pluggable" fiber connectors, where the fibers are plugged into the package or the chip itself at the very last step.
imec has extensive in-house expertise. How does it positively impact your optics research?
Since the start of the Optical I/O program in 2010, imec has acquired a wealth of know-how when it comes to developing integrated silicon photonics, leveraging decades of CMOS integration knowledge.
Furthermore, advanced forms of co-packaged optics require cutting-edge 3D integration technologies, for which we can rely on the years of experience gained within imec’s 3D system integration program. In the same vein, our long-term vision to plug optical interconnects into wafer-level packaging needs to be supported by a system-level analysis, for which we are collaborating with imec’s system-technology co-optimization (STCO) team.
The unique availability of such diverse competencies under one roof—along with the strong presence of our partner ecosystem—helps foster the development of highly complex systems and technologies.