
OFC 2026: Coherent and Broadcom Demonstrate 3.2 Terabit-Per-Second Optical Transceivers

At the Optical Fiber Communication Conference in Los Angeles, Coherent and Broadcom have demonstrated 3.2 Tbps optical transceiver modules — doubling the bandwidth of current-generation 1.6T interconnects. The technology is designed for the next wave of AI data center buildouts, where single training runs require moving exabytes of data between thousands of GPUs.

TechDrop Editorial

The Optical Fiber Communication Conference in Los Angeles has become the epicenter of AI infrastructure innovation, with Coherent and Broadcom demonstrating 3.2 terabit-per-second optical transceiver modules that double the bandwidth of current-generation 1.6T interconnects. The technology arrives as AI data center operators face a fundamental bottleneck: training runs for frontier models now require moving exabytes of data between thousands of GPUs, and current interconnect technology is struggling to keep pace.

The Bandwidth Wall

Modern AI training clusters connect thousands of GPUs into a single logical system, and the speed of the interconnects between those GPUs directly limits training performance. When GPUs spend time waiting for data from other GPUs, they sit idle — wasting expensive compute capacity. Current 1.6T transceivers, which became commercially available in late 2025, are already approaching their limits for the largest training clusters being planned for 2027 and beyond.
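To make the bottleneck concrete, a rough back-of-envelope calculation shows how long moving a fixed data volume takes at each generation's port rate. The port count and data volume below are hypothetical illustrations, not figures from the demonstrations:

```python
# Back-of-envelope: time to move a fixed data volume across a cluster.
# Port count and data volume are hypothetical.

EXABYTE_BITS = 8 * 10**18  # 1 EB expressed in bits

def transfer_seconds(data_bits: int, ports: int, tbps_per_port: float) -> float:
    """Seconds to move data_bits over `ports` parallel links of tbps_per_port each."""
    return data_bits / (ports * tbps_per_port * 10**12)

# 1 EB spread across 1,000 parallel links:
t_1600g = transfer_seconds(EXABYTE_BITS, 1000, 1.6)  # 5,000 s (~83 min)
t_3200g = transfer_seconds(EXABYTE_BITS, 1000, 3.2)  # 2,500 s (~42 min)
```

Even in this simplified model, every minute of transfer time is a minute of idle GPU capacity, which is why the per-port rate matters so much at cluster scale.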

Coherent's 3.2T demonstration uses 200G-per-lane PAM4 signaling across 16 lanes, while Broadcom's module uses 100G-per-lane coherent-optics signaling across 32 lanes. Both reach the same aggregate bandwidth, but with different tradeoffs in power consumption, reach, and cost per bit.
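The two lane configurations arrive at the same aggregate rate by different routes, which a quick check confirms (the helper function is illustrative, not from either vendor):

```python
def aggregate_tbps(lanes: int, gbps_per_lane: int) -> float:
    """Aggregate module bandwidth in Tbps from lane count and per-lane rate."""
    return lanes * gbps_per_lane / 1000

coherent_module = aggregate_tbps(lanes=16, gbps_per_lane=200)  # 200G PAM4 x 16
broadcom_module = aggregate_tbps(lanes=32, gbps_per_lane=100)  # 100G coherent x 32
# Both come to 3.2 Tbps.
```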

AI Data Center Impact

For AI data center operators, 3.2T transceivers represent more than a simple bandwidth upgrade. The doubling of per-port bandwidth means that the same physical infrastructure — the same fiber cables, the same rack layouts, the same cooling systems — can support twice the inter-GPU communication capacity. This is critical because physical infrastructure is the slowest and most expensive component of data center buildouts, often requiring 18 to 24 months of construction.
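One way to see the infrastructure leverage: for a fixed inter-rack bandwidth target, doubling the per-port rate halves the number of ports, and therefore fibers, required. The bandwidth target below is a hypothetical figure for illustration:

```python
import math

def ports_needed(target_gbps: int, port_gbps: int) -> int:
    """Ports required to reach a target aggregate bandwidth (rates in Gbps)."""
    return math.ceil(target_gbps / port_gbps)

# Hypothetical 102,400 Gbps (102.4 Tbps) inter-rack bandwidth target:
ports_16t = ports_needed(102400, 1600)  # 64 ports at 1.6T
ports_32t = ports_needed(102400, 3200)  # 32 ports at 3.2T, same fiber plant
```

Halving the port count on the same fiber plant is what lets operators upgrade capacity without the 18-to-24-month construction cycle.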

The technology also enables new cluster architectures that were previously impractical. With 3.2T interconnects, a single rack of GPUs can maintain full-bandwidth connectivity to more remote racks, allowing larger "all-to-all" communication patterns that improve training efficiency for models with complex parallel decomposition strategies.
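The all-to-all pattern is bandwidth-hungry because every rank exchanges a shard with every other rank, so total traffic grows roughly with the square of the cluster size. A sketch with hypothetical numbers:

```python
def all_to_all_bytes(n_ranks: int, shard_bytes: int) -> int:
    """Total bytes on the wire when each rank sends one shard to every other rank."""
    return n_ranks * (n_ranks - 1) * shard_bytes

# Hypothetical: 1,024 GPUs each exchanging 64 MiB shards.
total = all_to_all_bytes(1024, 64 * 2**20)
# 1024 * 1023 = 1,047,552 shard transfers, roughly 70 TB of traffic per exchange.
```

Quadratic growth in traffic is why wider all-to-all patterns only become practical as per-port bandwidth climbs.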

Timeline to Production

Both Coherent and Broadcom indicated that 3.2T transceivers will enter volume production in the second half of 2027, with initial customer evaluations beginning in late 2026. The timeline aligns with the deployment schedule for NVIDIA's Vera Rubin platform and AMD's next-generation AI accelerators, both of which are designed to take advantage of higher-bandwidth interconnects.

OFC 2026 runs through March 19 at the Los Angeles Convention Center, with additional demonstrations expected from Intel, Marvell, and Cisco throughout the week.
