TubeReads

These 3 Optical Networking Stocks Will Make Millionaires By 2030

Data center bottlenecks are shifting. GPUs no longer wait for compute — they wait for data to move. Google's TurboQuant breakthrough just slashed memory overhead by 80% and sped up inference by up to 8×, which means GPUs can now process far more tokens than the network can handle. That makes optical networking the next critical constraint in AI infrastructure, and three specialized companies dominate the layers that solve it. Which one offers the best risk-reward for investors who want exposure to the buildout without betting on a single chip vendor?

Video length: 19:19 · Published March 29, 2026 · Video language: en-US
5–6 min read · 3,196 spoken words summarized to 1,117 words (3×)

1

Key Takeaways

1

Google's TurboQuant cuts KV cache size by over 80% and speeds up inference by up to 8×, shifting the AI bottleneck from memory to networking.

2

AI optics are projected to grow from roughly $18 billion in 2025 to around $90 billion by 2030, driven by hyperscale data center expansion.

3

Coherent's 6-inch indium phosphide wafer process delivers 4× more chips per wafer and cuts die costs by over 60%, providing a significant cost and margin advantage.

4

All three companies secured multi-year Nvidia partnerships and multi-billion dollar purchase commitments, de-risking future revenue growth.

5

Customer concentration and supply chain complexity remain key risks, as a handful of cloud providers and telecom companies account for the bulk of demand.

In Brief

Optical networking is becoming the next major AI bottleneck as memory and compute constraints ease. Coherent's vertical integration, 6-inch indium phosphide fabs, and multi-billion dollar Nvidia partnership give it the deepest moat and best risk-reward profile among the three optical plays.


2

The Networking Bottleneck: Why TurboQuant Changed Everything

Google's TurboQuant breakthrough shifts AI constraints from memory to network capacity.

AI data centers face three major bottlenecks: compute, memory, and networking. GPUs used to be the main constraint, but with each new generation — Nvidia's Hopper, Blackwell, and Rubin — compute capacity has grown faster than most models need. That shifted the limit to memory bandwidth and data movement speed, which is why companies like Micron, Broadcom, and Arista have outperformed. But Google DeepMind just released TurboQuant, a data compression method that optimizes how data is stored and reused by GPUs. Every time an AI model generates a token, it stores large volumes of numbers in the KV cache; for models with billions of parameters and long context windows, that cache can consume most of a GPU's memory.

TurboQuant cuts the KV cache size by over 80%, speeds up key inference steps by up to 8×, and can be applied to existing models without retraining. The result: GPUs spend far less time waiting on high-bandwidth memory and can process many more tokens in the same amount of time — as long as the network can keep up. That makes optical networking the next critical bottleneck. Optical networks transmit light through glass fibers, carrying far more data over much longer distances with lower losses than copper. While copper works well inside a server or rack, serious AI clusters need optics to move data between racks, buildings, and even continents. Optical links push 400G, 800G, or even 1.6T of bandwidth per port — hundreds to thousands of times faster than a typical home internet connection.
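To make the KV cache numbers concrete, here is a back-of-envelope sizing sketch in Python. The model dimensions (80 layers, 8 KV heads, 128-dimensional heads, a 128k-token context, FP16 values) are hypothetical round numbers for a 70B-class model, not figures from the video, and the 80% reduction is applied as a simple scalar:

```python
# Rough KV cache sizing: illustrative model dimensions, not from the video.
# KV cache bytes ≈ 2 (K and V) × layers × KV heads × head dim × context × bytes per value.

def kv_cache_bytes(layers, kv_heads, head_dim, context_len, bytes_per_value):
    """Total KV cache footprint for a single sequence, in bytes."""
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_value

# Hypothetical 70B-class model at FP16 with a 128k-token context window.
baseline = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128,
                          context_len=128_000, bytes_per_value=2)

# A TurboQuant-style >80% cache reduction leaves under a fifth of the footprint.
compressed = baseline * 0.2

print(f"FP16 KV cache:  {baseline / 1e9:.1f} GB")   # ~41.9 GB
print(f"After ~80% cut: {compressed / 1e9:.1f} GB")  # ~8.4 GB
```

Even with made-up dimensions, the shape of the result explains the video's claim: tens of gigabytes of cache per long-context sequence shrink to single digits, so the GPU stops stalling on memory and starts stalling on the network instead.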


3

Three Layers, Three Leaders

💡
Lumentum (LITE)
Leading supplier of lasers and optical switches for AI data centers. Revenue hit $665 million last quarter, up 65% year-over-year, with a $400 million backlog for optical circuit switches and a multi-year Nvidia agreement that includes a $2 billion investment.
🔗
Coherent (COHR)
Vertically integrated from laser chips to complete 800G and 1.6T transceivers. Generated $1.7 billion in revenue last quarter, up 17% year-over-year, with 6-inch indium phosphide fabs that cut die costs by over 60%.
🌐
Ciena (CIEN)
Builds long-haul optical networks connecting multiple data centers. Posted record $1.4 billion revenue, up 33% year-over-year, with a $7 billion backlog and cloud provider revenue up 76% year-over-year.

4

Key Financials and Market Projections

Revenue growth ranges from 17% to 65% year-over-year with backlogs exceeding billions.

AI Optics Market Size (2025)
~$18 billion
Projected to grow to around $90 billion by 2030
Lumentum Quarterly Revenue
$665 million
Up 65% year-over-year; guiding to $850 million next quarter (roughly 28% QoQ growth)
Lumentum Non-GAAP Operating Margin
25.2%
Up from 8.2% a year ago
Coherent Quarterly Revenue
$1.7 billion
Up 17% year-over-year; data center segment up 34% year-over-year
Ciena Quarterly Revenue
$1.4 billion
Up 33% year-over-year with a $7 billion backlog
Nvidia Investment (Lumentum & Coherent)
$2 billion each
Multi-billion dollar purchase commitments for lasers, optical engines, and transceivers

5

Co-Packaged Optics: The Next Efficiency Leap

Moving lasers next to the chip cuts power per bit by 30–70%.


Co-packaged optics put laser engines in the same package as the switch chip or accelerator, shrinking the electrical distance from centimeters to millimeters. That 10× reduction in distance cuts power per bit by 30% to 70%, improves signal integrity, and lets hyperscalers scale to 1.6T or even 3.2T speeds without blowing past chip thermal limits or cooling budgets. Lumentum's ultra-high-power lasers and optical engines are being built specifically for these deployments, positioning the company at the center of the next wave of AI infrastructure efficiency.
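A quick sketch shows why power per bit dominates at these speeds. The 15 pJ/bit figure assumed below for a conventional pluggable module is illustrative, not a vendor spec; the savings is taken at 50%, the midpoint of the 30–70% range quoted above:

```python
# Port power = energy per bit × bit rate. Assumed pJ/bit figures, not vendor specs.

def port_power_watts(pj_per_bit, bits_per_second):
    """Electrical power drawn by one optical port, in watts."""
    return pj_per_bit * 1e-12 * bits_per_second

PORT_RATE = 1.6e12                # one 1.6T port
PLUGGABLE_PJ_PER_BIT = 15         # assumed conventional pluggable module
CPO_SAVINGS = 0.5                 # midpoint of the 30–70% range

pluggable = port_power_watts(PLUGGABLE_PJ_PER_BIT, PORT_RATE)
cpo = port_power_watts(PLUGGABLE_PJ_PER_BIT * (1 - CPO_SAVINGS), PORT_RATE)

print(f"Pluggable: {pluggable:.0f} W/port, co-packaged: {cpo:.0f} W/port")
```

Multiply a per-port difference like this across tens of thousands of ports in a single AI cluster and the appeal to hyperscalers' power and cooling budgets is obvious.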


6

Shared Risks and Common Tailwinds

Customer concentration and supply chain complexity offset by explosive AI demand.

RISKS
Customer Concentration and Supply Chain Exposure
A handful of cloud providers and telecom companies account for the bulk of demand, so any pause in spending hits all three companies at once. They also sit in the middle of complex global supply chains for wafers, lasers, and advanced packaging, making them vulnerable to disruptions, export controls, and tariffs.
TAILWINDS
5× Market Growth and Nvidia Partnerships
AI optics are projected to jump from roughly $18 billion in 2025 to around $90 billion by 2030. Every time Nvidia and Google solve a compute or memory bottleneck, they put more pressure on the network. All three companies have multi-year Nvidia partnerships and multi-billion dollar purchase commitments, de-risking their revenue ramps.
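The 5× jump works out to roughly a 38% compound annual growth rate. A one-liner with the rounded figures from above (the $18B and $90B endpoints are the projections quoted in this summary, not independent data):

```python
# Implied CAGR from the ~$18B (2025) → ~$90B (2030) projection quoted above.

def cagr(start, end, years):
    """Compound annual growth rate between two values over a number of years."""
    return (end / start) ** (1 / years) - 1

growth = cagr(18e9, 90e9, 2030 - 2025)
print(f"Implied CAGR: {growth:.1%}")  # ~38.0%
```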

7

The Verdict: Coherent Offers the Deepest Moat

Vertical integration and 6-inch wafer advantage make Coherent the best risk-reward pick.

Lumentum is the most focused play on lasers and optical engines, with 65% year-over-year growth, 42% gross margins, and Nvidia as a strategic partner. It's ideal for investors who want pure exposure to AI optical components without paying for full systems. Ciena sits one layer higher, turning lasers and engines into complete networks that connect AI data centers. Its $7 billion backlog and 33% year-over-year growth make it the best diversified systems-level play for investors who want broad exposure to the optical buildout.

But Coherent stands out for its vertical integration. It owns 6-inch indium phosphide wafer fabs, designs its own laser chips, and ships complete 800G and 1.6T transceivers. That process delivers over 4× more chips per wafer and cuts die costs by more than 60%, giving Coherent a massive cost and margin advantage in a supply-constrained market. Its data center backlog is growing four times faster than shipments, and it has a multi-billion dollar Nvidia investment and purchase agreement. For investors seeking the best ratio of risk to reward, Coherent's deep moat and full-stack approach make it the standout pick.


8

Mentioned Securities

LITE · Lumentum Holdings Inc.
COHR · Coherent Corp.
CIEN · Ciena Corporation
NVDA · NVIDIA Corporation
VME · Verse Royalties Inc.

9

People

Alex
Host and Analyst
host
Dan Oerity
CEO of Verse Royalties
mentioned

Glossary
KV cache · Memory storage for key-value pairs used by AI models during token generation; can consume most of a GPU's memory for large models.
Co-packaged optics · Optical engines placed in the same package as the chip, shrinking electrical distance to millimeters and cutting power per bit by 30–70%.
Indium phosphide (InP) · Semiconductor material well suited to generating and detecting laser light in fiber optic networks, used in optical transceivers and modulators.
Transceiver · Plug-in module that both transmits and receives data by converting electrical signals to light and back.

Disclaimer: This is an AI-generated summary of a YouTube video for educational and reference purposes. It does not constitute investment, financial, or legal advice. Always verify information against the original sources before making decisions. TubeReads is not affiliated with the content creator.