This is a big deal, and the partnership isn’t just about GPUs. On the surface, it looks like guaranteed compute capacity for OpenAI, but in practice it sets off a chain reaction across the entire AI stack.

When OpenAI locks in access to 10+ GW of Nvidia hardware, it doesn’t stop at chips. It drags along servers, networking fabrics, switches, optics, cooling, and power — the full data center build!

That’s why I keep stressing: own the companies that make AI run.


My take:

  • Nvidia secures long-term demand, regardless of hyperscaler politics.
  • OpenAI gets guaranteed capacity — no waiting in line.
  • The real beneficiaries also include the companies selling switches, optics, racks, and power gear. Every watt of inference needs a supply chain behind it.

For me, this marks a shift from hype to execution. When you commit to 10 GW of compute, you’re not experimenting anymore — you’re industrializing AI.

It’s also a reminder that the market tends to focus on the front-end names, while the real leverage sits deeper in the stack.

And honestly, this is why I stick to my approach: I’d rather own the picks and shovels that scale with every new model than try to guess which app will win the spotlight.

Read the official announcement →

📈 eToro PI – Follow me on eToro | See full stats on Bullaware