These 3 Optical Networking Stocks Will Make Millionaires By 2030
Data center bottlenecks are shifting. GPUs no longer wait for compute — they wait for data to move. Google's TurboQuant breakthrough just slashed memory overhead by 80% and sped up inference by up to 8×, which means GPUs can now process far more tokens than the network can handle. That makes optical networking the next critical constraint in AI infrastructure, and three specialized companies dominate the layers that solve it. Which one offers the best risk-reward for investors who want exposure to the buildout without betting on a single chip vendor?
Key Takeaways
Google's TurboQuant cuts KV cache size by over 80% and speeds up inference by up to 8×, shifting the AI bottleneck from memory to networking.
AI optics are projected to grow from roughly $18 billion in 2025 to around $90 billion by 2030, driven by hyperscale data center expansion.
Coherent's 6-inch indium phosphide wafer process delivers 4× more chips per wafer and cuts die costs by over 60%, providing a significant cost and margin advantage.
All three companies secured multi-year Nvidia partnerships and multi-billion dollar purchase commitments, de-risking future revenue growth.
Customer concentration and supply chain complexity remain key risks, as a handful of cloud providers and telecom companies account for the bulk of demand.
In Brief
Optical networking is becoming the next major AI bottleneck as memory and compute constraints ease. Coherent's vertical integration, 6-inch indium phosphide fabs, and multi-billion dollar Nvidia partnership give it the deepest moat and best risk-reward profile among the three optical plays.
The Networking Bottleneck: Why TurboQuant Changed Everything
Google's TurboQuant breakthrough shifts AI constraints from memory to network capacity.
AI data centers face three major bottlenecks: compute, memory, and networking. GPUs used to be the main constraint, but with each new generation (Nvidia's Hopper, Blackwell, and now Rubin), compute capacity has grown faster than most models need. That shifted the limit to memory bandwidth and data movement speed, which is why companies like Micron, Broadcom, and Arista have outperformed. But Google DeepMind just released TurboQuant, a compression method for the data GPUs store and reuse during inference. Every time an AI model generates a token, it stores the attention keys and values for every previous token in the KV cache; for models with billions of parameters and long context windows, that cache can consume most of a GPU's memory.
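To make the mechanism concrete, here is a minimal sketch of KV-cache quantization in NumPy: store the cache as 4-bit integer codes plus scale and offset metadata instead of 16-bit floats. The tensor shapes, the 4-bit width, and the round-to-nearest scheme are illustrative assumptions, not TurboQuant's actual algorithm, which is considerably more sophisticated.

```python
# Minimal sketch of KV-cache quantization: low-bit integer codes plus
# per-row scale/offset instead of fp16. Shapes and bit width are
# illustrative assumptions, not TurboQuant's real method.
import numpy as np

def quantize_kv(cache: np.ndarray, bits: int = 4):
    """Asymmetric round-to-nearest quantization along the last axis."""
    qmax = 2 ** bits - 1
    lo = cache.min(axis=-1, keepdims=True)
    hi = cache.max(axis=-1, keepdims=True)
    scale = (hi - lo) / qmax
    codes = np.round((cache - lo) / scale).astype(np.uint8)  # values 0..15
    return codes, scale, lo

def dequantize_kv(codes, scale, lo):
    return codes * scale + lo

# Hypothetical cache: 8 layers x 8 KV heads x 2048 tokens x 128 dims
cache = np.random.randn(8, 8, 2048, 128).astype(np.float32)
codes, scale, lo = quantize_kv(cache)
recon = dequantize_kv(codes, scale, lo)
print(f"max abs error: {np.abs(recon - cache).max():.3f}")

fp16_bytes = cache.size * 2                              # fp16 baseline
q_bytes = cache.size // 2 + (scale.size + lo.size) * 2   # packed 4-bit + fp16 metadata
print(f"fp16 cache:  {fp16_bytes / 2**20:.1f} MiB")
print(f"4-bit cache: {q_bytes / 2**20:.1f} MiB "
      f"({100 * (1 - q_bytes / fp16_bytes):.0f}% smaller)")
```

Even this naive scheme shrinks the cache by about 73%; smarter quantizers recover more accuracy at lower effective bit widths, which is where the headline numbers below come from.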
TurboQuant cuts the KV cache size by over 80%, speeds up key inference steps by up to 8×, and can be applied to existing models without retraining. The result: GPUs spend far less time waiting on high-bandwidth memory and can process many more tokens in the same amount of time — as long as the network can keep up. That makes optical networking the next critical bottleneck. Optical networks transmit light through glass fibers, carrying far more data over much longer distances with lower losses than copper. While copper works well inside a server or rack, serious AI clusters need optics to move data between racks, buildings, and even continents. Optical links push 400G, 800G, or even 1.6T of bandwidth per port — hundreds to thousands of times faster than a typical home internet connection.
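For scale, here is a quick back-of-the-envelope comparison of those port speeds. The 100 GB payload (think a large model shard or activation set) and the 1 Gb/s home-internet baseline are illustrative assumptions:

```python
# Time to move a 100 GB payload across the link speeds mentioned above.
# Payload size and the 1 Gb/s baseline are illustrative assumptions.
payload_bits = 100e9 * 8

for name, gbps in [("home 1G", 1), ("400G port", 400),
                   ("800G port", 800), ("1.6T port", 1600)]:
    seconds = payload_bits / (gbps * 1e9)
    print(f"{name:>10}: {seconds:7.1f} s  ({gbps}x a 1 Gb/s line)")
```

At 1 Gb/s the transfer takes over 13 minutes; an 800G port does it in one second.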
Three Layers, Three Leaders
Key Financials and Market Projections
Revenue growth ranges from 17% to 65% year over year, with multi-billion dollar backlogs.
Co-Packaged Optics: The Next Efficiency Leap
Moving lasers next to the chip cuts power per bit by 30–70%.
Co-packaged optics put laser engines in the same package as the switch chip or accelerator, shrinking the electrical path from centimeters to millimeters. That roughly 10× reduction in distance cuts power per bit by 30% to 70%, improves signal integrity, and lets hyperscalers scale to 1.6T or even 3.2T speeds without blowing through thermal limits or cooling budgets. Lumentum's ultra-high-power lasers and optical engines are being built specifically for these deployments, positioning the company at the center of the next wave of AI infrastructure efficiency.
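The cluster-level impact is easy to estimate. In the sketch below, the picojoule-per-bit figures are assumed order-of-magnitude values for pluggable versus co-packaged optics, not vendor specifications; the point is how a per-bit saving compounds across tens of thousands of ports:

```python
# Rough sketch of why power per bit matters at cluster scale. The pJ/bit
# figures are assumed order-of-magnitude values, not vendor specs.
PORTS = 10_000               # hypothetical AI cluster fabric
GBPS_PER_PORT = 1600         # 1.6T ports
total_bps = PORTS * GBPS_PER_PORT * 1e9

def optics_watts(pj_per_bit: float) -> float:
    # pJ/bit * bits/s = pW; 1e-12 converts to watts
    return total_bps * pj_per_bit * 1e-12

for label, pj in [("pluggable", 15.0), ("co-packaged", 5.0)]:
    print(f"{label:>12} ({pj:.0f} pJ/bit): "
          f"{optics_watts(pj) / 1e3:.0f} kW for optics alone")
```

Under these assumptions, a 10,000-port 1.6T fabric spends about 240 kW on pluggable optics versus roughly 80 kW co-packaged, a saving at the upper end of the 30–70% range quoted above.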
Shared Risks and Common Tailwinds
Customer concentration and supply chain complexity offset by explosive AI demand.
The Verdict: Coherent Offers the Deepest Moat
Vertical integration and 6-inch wafer advantage make Coherent the best risk-reward pick.
Lumentum is the most focused play on lasers and optical engines, with 65% year-over-year growth, 42% gross margins, and Nvidia as a strategic partner. It's ideal for investors who want pure exposure to AI optical components without paying for full systems. Ciena sits one layer higher, turning lasers and engines into complete networks that connect AI data centers. Its $7 billion backlog and 33% year-over-year growth make it the best diversified, systems-level play for investors who want broad exposure to the optical buildout.
But Coherent stands out for its vertical integration. It owns 6-inch indium phosphide wafer fabs, designs its own laser chips, and ships complete 800G and 1.6T transceivers. The 6-inch process yields over 4× more chips per wafer than the smaller wafers it replaces and cuts die costs by more than 60%, giving Coherent a massive cost and margin advantage in a supply-constrained market. Its data center backlog is growing four times faster than shipments, and it has a multi-billion dollar Nvidia investment and purchase agreement. For investors seeking the best ratio of risk to reward, Coherent's deep moat and full-stack approach make it the standout pick.
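The wafer math largely follows from geometry: die count scales with wafer area, and the 4× figure implies a 3-inch baseline, since a 6-inch wafer has four times the area. Here is a quick sanity check with an assumed die size and assumed processed-wafer costs (not Coherent's figures):

```python
# Sanity check on the wafer economics: dies per wafer scale with area,
# so a 6-inch wafer yields ~4x the dies of a 3-inch one. Die size and
# wafer costs are illustrative assumptions, not Coherent figures.
import math

def gross_dies(diameter_mm: float, die_area_mm2: float) -> int:
    """Gross die count, ignoring edge loss and scribe lanes."""
    return int(math.pi * (diameter_mm / 2) ** 2 / die_area_mm2)

DIE_AREA_MM2 = 1.0  # assumed laser-chip die size
for inches, wafer_cost in [(3, 1000.0), (6, 1500.0)]:  # assumed processed costs
    dies = gross_dies(inches * 25.4, DIE_AREA_MM2)
    print(f'{inches}" wafer: {dies:6d} dies at ${wafer_cost / dies:.3f} each')
```

If processing the larger wafer costs only about 1.5× as much as the smaller one, cost per die falls by roughly 60%, consistent with the figure above.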
Notice: This is an AI-generated summary of a YouTube video, for educational and reference purposes. It does not constitute investment, financial, or legal advice. Always verify information against the original sources before making decisions. TubeReads is not affiliated with the content creator.