
//

AI Infrastructure Wars: Token Efficiency, Humanoid Robots, and the Energy Constraint That Could Prevent a Bubble

iConnections

29:17

11K Views

THESIS

The power shortage that terrifies most investors may actually prevent the catastrophic AI overbuild that has destroyed value in every previous technology cycle.

ASSET CLASS

SECULAR

CONVICTION

HIGH

TIME HORIZON

3 to 5 years

01

//

PREMISE

Historical technology cycles follow a predictable pattern of financial bubbles leading to massive overbuilds and subsequent crashes

Every fundamentally new technology of the last 300 years has produced a financial bubble, from the South Sea Bubble enabled by understanding longitude, through the railroad, canal, automobile, TV, and internet bubbles. The mechanism is consistent: capital floods into the new technology, infrastructure gets overbuilt relative to near-term demand, and a crash follows. This is not a bug but a feature of how markets price transformative technologies. The AI cycle appears to be following the same pattern, with massive capital deployment into data centers, chips, and model training.

02

//

MECHANISM

Physical constraints on watts and wafers create a ceiling on AI infrastructure buildout that previous technology cycles lacked

Unlike prior technology waves, AI faces two hard physical constraints that cannot be quickly overcome. First, the US faces severe power shortages with no domestic uranium enrichment capacity, requiring Russian imports for nuclear fuel, and limited ability to rapidly deploy new generation. Second, Taiwan controls advanced chip manufacturing and will not produce wafers beyond certain volumes regardless of price. These constraints mean that even if capital wants to overbuild AI infrastructure, it physically cannot. The power shortage forces efficiency innovations like Anthropic's token efficiency, Tesla's edge computing at human brain power levels, and SpaceX's concept of racks in space connected by laser links.
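The "human brain power levels" framing invites a back-of-envelope comparison of edge versus centralized compute. A minimal sketch, where the rack power, per-vehicle inference budget, and fleet size are all assumed figures for illustration (none come from the talk):

```python
# Back-of-envelope only: rack power, edge budget, and fleet size are all
# assumed figures for illustration, not numbers from the talk.
RACK_POWER_W = 100_000   # assumed dense AI rack (~100 kW)
EDGE_POWER_W = 40        # assumed per-vehicle inference budget (20-50 W range)
FLEET_SIZE = 2_000_000   # assumed fleet running edge inference

fleet_power_mw = FLEET_SIZE * EDGE_POWER_W / 1e6
racks_equivalent = FLEET_SIZE * EDGE_POWER_W / RACK_POWER_W

print(f"Fleet draw: {fleet_power_mw:.0f} MW, spread across the grid")
print(f"Same power budget as {racks_equivalent:.0f} centralized 100 kW racks")
```

Under these assumptions, a two-million-vehicle fleet draws about 80 MW in aggregate, distributed across the grid rather than concentrated at a single interconnection point, which is the sense in which edge inference sidesteps the data-center power constraint.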

03

//

OUTCOME

A smoother, longer AI investment cycle with reduced crash risk compared to historical technology bubbles

The physical constraints on buildout should produce a more sustainable AI investment cycle. Rather than the boom-bust pattern of previous technology waves, the energy and wafer constraints create a natural governor on speculation. This benefits long-term investors by reducing the probability of catastrophic overbuilding. The cycle may feature smaller bubbles in specific segments rather than one massive bubble. Companies that solve the efficiency problem, whether through token efficiency like Anthropic, edge computing like Tesla, or novel deployment like SpaceX's space-based compute, capture disproportionate value because they can grow within the constraints.

//

NECESSARY CONDITION

Regulatory frameworks must remain permissive to innovation (avoiding the 'European' model) and open source development must remain unencumbered by downstream liability.

If we can't overbuild because we don't have enough energy, and we don't have enough wafers because Taiwan simply won't make the wafers, then I think we could have a smoother-for-longer AI cycle. That is good for everyone in this room and avoids the bubble, or at least a big bubble.

30:45

RISK

Steel Man Counter-Thesis

The AI infrastructure investment thesis rests on three assumptions that may prove simultaneously false. First, the thesis assumes American AI labs maintain technological leadership, yet the speakers themselves acknowledge Chinese competitors like DarkSS are achieving comparable or superior results in robotics; the claim that Tesla is the most advanced AI in the world because it runs on inferior edge chips is a reframing of necessity as virtue.

Second, the energy and wafer constraints the speakers cite as bubble-preventing mechanisms are not symmetric: China has demonstrated it can build energy capacity faster through state coordination, and TSMC's wafer allocation decisions are geopolitically contingent. If Taiwan prioritizes strategic relationships or faces coercion, the wafer constraint becomes an American vulnerability, not a shared limitation.

Third, the token efficiency advantages at Anthropic and cost advantages at xAI assume these firms can maintain discipline while burning through capital in a competitive race; OpenAI's acquisition of key Anthropic talent and the need to match capability benchmarks create pressure to abandon efficiency for capability.

The most credible counter-thesis is that the current AI investment cycle follows the historical pattern exactly: the speakers' optimism that constraints prevent overbuild is the same optimism expressed in every prior bubble. The constraints they cite, watts and wafers, are supply constraints that can be overcome with sufficient capital and time, meaning they delay rather than prevent the overbuild. Meanwhile, the demand side faces its own constraints: enterprise AI adoption is slower than consumer adoption, the productivity gains from coding assistants accrue to labor not capital, and the displacement of SaaS revenue may simply redistribute rather than create value. The smart money should consider that Valor and Atreides are talking their book at a conference, and the appropriate response to their optimism is to ask what they would need to see to change their view.

//

RISK 01

AI-Induced Software Value Destruction Creates Systemic Financial Contagion

THESIS

The speakers acknowledge that AI-enabled vibe coding threatens traditional SaaS businesses, with HubSpot down from $850 to the low $200s. However, they underestimate the second-order effects: private equity firms have loaded these software companies with 3-4x leverage. If AI commoditizes software faster than expected, widespread covenant breaches and defaults in leveraged software buyouts could trigger broader credit market stress. The speakers note they don't play in these markets but express concern, yet their core investments in AI enablers depend on a functioning capital markets ecosystem that could be disrupted by this very dynamic.

DEFENSE

Gracias explicitly flags this risk, stating he would be very worried holding high-yield securities for leveraged software buyouts done 2-3 years ago. However, the defense is limited to acknowledgment rather than analysis of how this contagion might affect their own portfolio companies or AI infrastructure investments.
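The covenant-breach mechanism in this risk can be made concrete with a toy interest-coverage calculation. All inputs below (EBITDA base, leverage multiple, borrowing rate, covenant floor) are assumed figures for illustration, not deal data from the talk:

```python
# Toy covenant-stress sketch. EBITDA base, leverage multiple, borrowing
# rate, and covenant threshold are all assumed figures, not deal data.
EBITDA = 100.0       # baseline EBITDA, $M (assumed)
LEVERAGE = 3.5       # debt at 3-4x EBITDA, midpoint (assumed)
RATE = 0.09          # blended cost of high-yield debt (assumed)
MIN_COVERAGE = 2.0   # interest-coverage covenant floor (assumed)

debt = EBITDA * LEVERAGE
interest = debt * RATE
for decline in (0.0, 0.2, 0.4):
    coverage = EBITDA * (1 - decline) / interest
    status = "OK" if coverage >= MIN_COVERAGE else "BREACH"
    print(f"EBITDA -{decline:.0%}: coverage {coverage:.2f}x -> {status}")
```

The point of the sketch: at 3-4x leverage a company can absorb a moderate EBITDA decline, but an AI-driven 40% hit pushes coverage below a typical 2x covenant, which is where the default-and-contagion dynamic the risk describes would begin.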

//

RISK 02

Token Efficiency Leadership Is Transient and Commoditizable

THESIS

Baker builds a core investment thesis around Anthropic's token efficiency and xAI's cost-per-token advantages. However, token efficiency is a function of architecture choices and optimization techniques that can be replicated. Google, OpenAI, and Chinese labs are all racing on the same metrics. The speaker cites current efficiency ratios as durable competitive advantages, but the history of AI benchmarks shows rapid convergence. What looks like a 2x efficiency advantage today could be matched in 6-12 months through architectural innovations, distillation techniques, or simply more compute thrown at optimization.

DEFENSE

The speakers present token efficiency as a structural moat without acknowledging the rapid pace at which these metrics have historically converged across labs. No discussion of how quickly competitors might close efficiency gaps or whether efficiency leadership translates to sustained pricing power and market share.
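One way to reason about how quickly a 2x efficiency lead erodes is a simple decay model. The starting gap and the 10%-per-month catch-up rate below are assumptions chosen to illustrate the convergence argument, not measured data:

```python
import math

# Hypothetical decay model of an efficiency lead. The 2x starting gap and
# the 10%-per-month catch-up rate are assumptions for illustration only.
def efficiency_gap(initial_gap=2.0, monthly_catchup=0.10, months=12):
    """Remaining multiplicative advantage after competitors close
    `monthly_catchup` of the log-gap each month."""
    log_gap = math.log(initial_gap)
    for _ in range(months):
        log_gap *= (1 - monthly_catchup)
    return math.exp(log_gap)

for m in (0, 6, 12):
    print(f"month {m:2d}: remaining advantage ~{efficiency_gap(months=m):.2f}x")
```

Under these assumptions a 2x lead decays to roughly 1.2x within a year, which is the quantitative shape of the risk: the moat does not vanish overnight, but it stops supporting premium pricing well inside a typical investment horizon.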

//

RISK 03

Geopolitical Weaponization Thesis Inverts: Regulation Becomes Inevitable Post-Incident

THESIS

Baker argues geopolitical fears have been weaponized such that we will not regulate AI because China is not regulating. This assumes the current policy equilibrium persists. However, a single catastrophic AI incident - whether an autonomous vehicle fatality at scale, a coding agent causing critical infrastructure failure like the AWS outage mentioned, or an AI system causing financial market disruption - could instantly flip the political calculus. The very autonomy and capability gains the speakers celebrate increase the probability of such incidents. Post-incident regulation could be severe and indiscriminate, potentially halting deployments or requiring costly compliance regimes that destroy the capital efficiency advantages underlying the thesis.

DEFENSE

The speakers dismiss regulatory risk based on current geopolitical dynamics without modeling how a high-profile incident could trigger rapid policy reversal. The AWS outage example they cite - where an AI agent deleted critical code causing a 14-hour outage - is exactly the type of precursor event that could catalyze regulatory action if it occurred at greater scale or with more visible consequences.

//

ASYMMETRIC SKEW

Downside scenarios include credit contagion from leveraged software buyouts, rapid commoditization of efficiency advantages, regulatory shock post-incident, and geopolitical disruption to wafer supply. Upside scenarios depend on sustained American AI leadership, continued absence of catastrophic incidents, and energy/wafer constraints preventing competitive overbuild. The asymmetry skews negative because the downside scenarios are correlated and mutually reinforcing while the upside scenarios require multiple independent conditions to persist simultaneously.

ALPHA

NOISE

The Consensus

The market consensus holds that AI will fundamentally disrupt traditional software businesses (reflected in valuations collapsing from 8x to 3x sales), that energy constraints represent a binding constraint on AI development, and that the US faces significant challenges competing with China across robotics, drones, and strategic technology. The consensus view treats these as structural headwinds requiring years of policy and infrastructure solutions to address.

The market reasons that software disruption is inevitable and already priced in, that AI development requires massive centralized data centers consuming hundreds of megawatts, that humanoid robots versus specialized robots is an unresolved debate, and that nuclear energy in the US faces insurmountable regulatory and political barriers. The consensus causality chain runs: AI advancement requires power → power requires infrastructure → infrastructure requires years → China moves faster.

SIGNAL

The Variant

The speakers believe the US will solve its energy and competitive challenges through American ingenuity and scrappiness rather than top-down planning. They argue stranded power capacity exists throughout the country waiting to be unlocked, that space-based computing represents a near-term viable solution rather than science fiction, and that Tesla's edge AI represents the most capital-efficient AI in the world today. Critically, they view the power shortage as investment-positive because it prevents the overbuild cycle that typically follows technological bubbles, potentially creating a smoother, longer AI cycle rather than boom-bust.

The speakers' logic inverts several consensus assumptions. First, Tesla's edge AI demonstrates that superhuman performance can be achieved at 20-50W rather than megawatts, suggesting the power constraint may be a solvable optimization problem rather than a physics barrier. Second, humanoid robots learning from videos of humans has decisively ended the humanoid versus specialized robot debate in favor of humanoid form factors. Third, small modular reactors using the Navy's proven thorium-based approach represent deployable solutions, not theoretical ones. Fourth, data centers in space are not buildings floating in orbit but interconnected racks on Starlink V3 satellites connected by lasers in vacuum, which is faster than fiber. The causality chain becomes: token efficiency + edge computing + distributed space infrastructure = sustainable AI advantage without bubble dynamics.
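The "lasers in vacuum, which is faster than fiber" claim is straightforward physics: light in silica fiber travels roughly 1.47x slower than in vacuum. A quick check, where the 8,000 km path length is an assumed example distance:

```python
# Physics check: light in vacuum vs silica fiber (refractive index ~1.47).
# The 8,000 km path length is an assumed example distance.
C_VACUUM_KM_S = 299_792.458
FIBER_INDEX = 1.47

def one_way_ms(distance_km, speed_km_s):
    """One-way propagation delay in milliseconds."""
    return distance_km / speed_km_s * 1000

dist_km = 8_000
vac = one_way_ms(dist_km, C_VACUUM_KM_S)
fib = one_way_ms(dist_km, C_VACUUM_KM_S / FIBER_INDEX)
print(f"vacuum: {vac:.1f} ms one-way, fiber: {fib:.1f} ms ({fib/vac:.2f}x slower)")
```

Propagation delay alone gives vacuum links a fixed ~1.47x advantage over fiber at any distance; real-world routing, switching, and the less-direct paths terrestrial fiber must take would widen the gap further, which is the basis of the laser-link argument.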

SOURCE OF THE EDGE

The speakers claim multiple sources of edge: operating experience (Valor has six developers eating their own cooking on AI for four years, seeing 40-50% productivity gains), board-level access (Antonio sits on Tesla's board with direct visibility into Optimus development and FSD performance metrics), and recently added defense expertise (Chris Pavel, former NATO Supreme Allied Commander, joined as partner).

The credibility assessment is mixed. The Tesla board position represents a genuine structural information advantage on roadmap and capability data unavailable to outside investors. The productivity metrics from internal developers are first-party data but a limited sample.

Several claims warrant skepticism, however. The assertion that Tesla's autopilot is 'the most advanced AI in the world' doing 'the most complex thing humans do' conflates narrow driving tasks with general intelligence claims that wouldn't survive scrutiny. The space data center thesis, while technically coherent, relies on extrapolations from current Starlink specs that haven't been validated at scale. The nuclear energy optimism ('we know how to do this') glosses over the decade-plus regulatory timeline that has killed every recent US reactor project.

The edge is real on Tesla-specific intelligence; it becomes progressively more speculative as claims extend to macro energy and geopolitics.

//

CONVICTION DETECTED

• We are America, okay? We are inventors. We're going to figure this out.
• I'll always bet on Team Blue.
• If we don't have Optimus... we are in trouble.
• We have to win. I think it's coming. I think it's real.
• We need to have at least 40 parity, if not better.
• You either evolve now... or you're dead.
• These are very important issues. We're working on it very definitely at our firm.
• I do think it's coming.
• The debate is over now.

//

HEDGE DETECTED

• I don't know if it's later this year or if it's 2027.
• I don't know if it really applies just yet.
• I might be off by a little on either.
• I wouldn't say that.
• Who knows when it's already broken is going to break.
• I think if you want to think about Tesla... you can ask by the fact that Tesla actually is today the most advanced AI in the world (hedges with conditional framing).
• Maybe you have a rack in space that consumes, you know, 100 kW.
• It's definitely bubble. That's not... it's not like one thing... there's little bubbles here... how big the bubble is the question.

The ratio reveals a pattern of high conviction on directional bets (America wins, Tesla leads, humanoid robots dominate) paired with hedging on specific timelines and magnitudes. This is consistent with experienced investors who have learned that being right on direction but wrong on timing is the most common failure mode. The hedging on bubble dynamics at the end is notable: the speakers want to express optimism but cannot suppress their awareness that current valuations may already embed too much enthusiasm. Weight the macro thesis heavily; discount the specific timing predictions.