Investing where AI actually runs.
I focus on the picks-and-shovels of AI inference—GPUs, high-speed networking, servers, power & cooling—and the cloud platforms where usage is monetized at scale. I run a concentrated, long-only portfolio (no leverage) with a 5–10 year horizon. The goal is simple: own critical infrastructure and real usage, not passing trends.
Why me
- Usage-linked thesis: Returns should scale with inference, not headlines.
- Concentrated quality: Trim on strength; add to quality names on weakness.
- Clear risk discipline: Long-only, no leverage, transparent sizing.
- Weekly notes: Copiers always know the why behind moves.
Strategy — Build & Use
I invest across the AI stack so results scale with real usage.
Build (Infrastructure): NVDA, AVGO, ANET, SMCI, TSM, MRVL, VRT, CRWV, MU, CDNS
Use (Cloud & Platforms): MSFT, AMZN, ORCL
Not “build once”—the aim is to earn whenever the infrastructure is used.
What I look for
- Durable demand tied to tokens/queries/agents (inference).
- Capacity and capex that actually get deployed.
- Moats in interconnect, supply chains, software stacks.
- Consistent execution vs. guidance.
Latest updates
Short takes here; full write-ups live on eToro.
- NVIDIA & the inference supply chain — why every watt pulls servers, fabrics, optics, power, cooling
- CoreWeave vs hyperscalers — capacity is the destination; balance sheets differ
- Power & cooling — the quiet compounding layer of 24/7 AI agents
Portfolio snapshot
- Build / Infrastructure: NVDA, ANET, AVGO, SMCI, TSM, MRVL, VRT, CRWV, MU, CDNS
- Use / Platforms: MSFT, AMZN, ORCL
Subscribe — get my Sunday note (5-minute read)
Weekly AI-infra highlights + portfolio notes, Sundays 18:00 CET. No spam.
For educational purposes only. Not investment advice. Investing involves risk; you can lose capital.