dstl

TERMINAL

TERMINAL

LIBRARY

LIBRARY

//

The Enterprise AI Gap: Why Diffusion Will Take Longer Than Silicon Valley Thinks

The Enterprise AI Gap: Why Diffusion Will Take Longer Than Silicon Valley Thinks

The Enterprise AI Gap: Why Diffusion Will Take Longer Than Silicon Valley Thinks

a16z

a16z

58:12

58:12

THESIS

Enterprise AI adoption faces a structural bottleneck: the gap between agent capability and organizational readiness will persist far longer than Silicon Valley's startup-centric optimism suggests.

Enterprise AI adoption faces a structural bottleneck: the gap between agent capability and organizational readiness will persist far longer than Silicon Valley's startup-centric optimism suggests.

Enterprise AI adoption faces a structural bottleneck: the gap between agent capability and organizational readiness will persist far longer than Silicon Valley's startup-centric optimism suggests.

ASSET CLASS

ASSET CLASS

SECULAR

SECULAR

CONVICTION

CONVICTION

HIGH

HIGH

TIME HORIZON

TIME HORIZON

3 to 5 years

3 to 5 years

01

01

//

PREMISE

PREMISE

The agent economy requires software built for agents, but enterprise systems of record were built for humans and cannot be restructured quickly

The agent economy requires software built for agents, but enterprise systems of record were built for humans and cannot be restructured quickly

If you have a hundred or a thousand times more agents than people, then your software has to be built for agents. Yet the existing enterprise software stack—SAP, Workday, legacy ERP systems—embeds decades of domain knowledge not just in well-orchestrated data layers but in UI logic, middle tiers, and usage patterns. Startups can build from first principles without legacy constraints, but enterprises face compounding barriers: security and access control problems (agents can be prompt-injected far more easily than humans can be socially engineered), organizational permission structures that break when agents operate autonomously across shared resources, and the fundamental inability to treat agents as independent identities when they remain liability extensions of the humans who deploy them. The result is a structural imbalance where individual developers and startups race ahead while enterprises with the largest economic footprints freeze or move at glacial pace.

If you have a hundred or a thousand times more agents than people, then your software has to be built for agents. Yet the existing enterprise software stack—SAP, Workday, legacy ERP systems—embeds decades of domain knowledge not just in well-orchestrated data layers but in UI logic, middle tiers, and usage patterns. Startups can build from first principles without legacy constraints, but enterprises face compounding barriers: security and access control problems (agents can be prompt-injected far more easily than humans can be socially engineered), organizational permission structures that break when agents operate autonomously across shared resources, and the fundamental inability to treat agents as independent identities when they remain liability extensions of the humans who deploy them. The result is a structural imbalance where individual developers and startups race ahead while enterprises with the largest economic footprints freeze or move at glacial pace.

02

02

//

MECHANISM

MECHANISM

Enterprise security constraints, organizational inertia, and the unsolved agent identity problem create a forcing function that delays AI diffusion at scale

Enterprise security constraints, organizational inertia, and the unsolved agent identity problem create a forcing function that delays AI diffusion at scale

The mechanism operates through several reinforcing loops. First, agents cannot yet be trusted with confidential information because anything in a context window can theoretically be prompt-extracted, meaning enterprises cannot grant agents access to sensitive resources like M&A data rooms. Second, the abstraction layer problem means most knowledge workers cannot think algorithmically enough to direct agents effectively—the spreadsheet analogy shows this capability adoption takes years, not months. Third, CFOs and CIOs are actively resistant to letting agents perform integration on demand because unsupervised creation of new API connections between systems threatens system-of-record integrity. Fourth, the engineering compute budget question—how much token spend to allocate per engineer—has no established framework, creating paralysis at the executive level. These constraints compound: enterprises will restrict agent access, which limits agent utility, which slows the feedback loop that would otherwise accelerate adoption. Meanwhile, startups without these constraints will move faster, creating visible competitive pressure that eventually forces enterprise adoption—but only after standards, norms, and security solutions emerge through iteration, much as open source licensing norms took years to develop.

The mechanism operates through several reinforcing loops. First, agents cannot yet be trusted with confidential information because anything in a context window can theoretically be prompt-extracted, meaning enterprises cannot grant agents access to sensitive resources like M&A data rooms. Second, the abstraction layer problem means most knowledge workers cannot think algorithmically enough to direct agents effectively—the spreadsheet analogy shows this capability adoption takes years, not months. Third, CFOs and CIOs are actively resistant to letting agents perform integration on demand because unsupervised creation of new API connections between systems threatens system-of-record integrity. Fourth, the engineering compute budget question—how much token spend to allocate per engineer—has no established framework, creating paralysis at the executive level. These constraints compound: enterprises will restrict agent access, which limits agent utility, which slows the feedback loop that would otherwise accelerate adoption. Meanwhile, startups without these constraints will move faster, creating visible competitive pressure that eventually forces enterprise adoption—but only after standards, norms, and security solutions emerge through iteration, much as open source licensing norms took years to develop.

03

03

//

OUTCOME

OUTCOME

A multi-year bifurcation between startup AI adoption velocity and enterprise AI adoption, with enterprise software incumbents retaining position longer than consensus expects

A multi-year bifurcation between startup AI adoption velocity and enterprise AI adoption, with enterprise software incumbents retaining position longer than consensus expects

The market outcome is a prolonged transition period where enterprise software incumbents like SAP are not displaced—it is 'absurd to think you're going to vibe code your way to SAP'—while startups and advanced individuals gain disproportionate leverage. Companies willing to make the bet on AI spend will lead their categories, but the majority of large enterprises will lag significantly. The Wall Street consensus is wrong in two directions simultaneously: wrong about the economics being zero-sum (the opportunity is at least an order of magnitude larger than current models assume, just as cloud computing vastly exceeded the server business it supposedly replaced), and wrong about the speed of enterprise adoption (the diffusion will take longer than Silicon Valley projects). Software companies that build high-quality APIs, solve agent identity and access control, and enable agent-native workflows will capture outsized value. Usage-based pricing will become the dominant model, and the engineering compute budget conversation will be the defining corporate finance challenge of the next few years. Eventually, just as MIPS pricing collapsed for IBM mainframes, token economics will be resolved through capacity expansion, algorithmic efficiency gains, or hardware breakthroughs—but the transition period will be painful and longer than expected.

The market outcome is a prolonged transition period where enterprise software incumbents like SAP are not displaced—it is 'absurd to think you're going to vibe code your way to SAP'—while startups and advanced individuals gain disproportionate leverage. Companies willing to make the bet on AI spend will lead their categories, but the majority of large enterprises will lag significantly. The Wall Street consensus is wrong in two directions simultaneously: wrong about the economics being zero-sum (the opportunity is at least an order of magnitude larger than current models assume, just as cloud computing vastly exceeded the server business it supposedly replaced), and wrong about the speed of enterprise adoption (the diffusion will take longer than Silicon Valley projects). Software companies that build high-quality APIs, solve agent identity and access control, and enable agent-native workflows will capture outsized value. Usage-based pricing will become the dominant model, and the engineering compute budget conversation will be the defining corporate finance challenge of the next few years. Eventually, just as MIPS pricing collapsed for IBM mainframes, token economics will be resolved through capacity expansion, algorithmic efficiency gains, or hardware breakthroughs—but the transition period will be painful and longer than expected.

//

NECESSARY CONDITION

Regulatory frameworks must remain permissive to innovation (avoiding the 'European' model) and open source development must remain unencumbered by downstream liability.

The biggest problem right now is everybody is trying to figure out the economics of all of this when they're off by at least an order of magnitude on how big the opportunity is.

The biggest problem right now is everybody is trying to figure out the economics of all of this when they're off by at least an order of magnitude on how big the opportunity is.

27:45

RISK

Steel Man Counter-Thesis

The strongest counter-thesis is that the AI agent revolution will follow the trajectory of enterprise mobility or blockchain — generating enormous hype, real technical capability, and genuine early adopter success stories, while ultimately failing to transform enterprise operations at the pace and scale the thesis requires. The core mechanism of failure is a three-way collision: (1) Unresolved agent security creates a rational freeze at the enterprise level, where the majority of software spend and system-of-record authority resides. Unlike the open source parallel cited, open source security risks were bounded by code review and licensing — agent security risks are unbounded because prompt injection is an unsolved computer science problem, not a governance problem. (2) Domain complexity in legacy enterprise systems is not a data access problem but an institutional knowledge problem encoded in workflows, UI logic, and organizational processes that resist extraction into agent-consumable formats. The 25-year history of enterprise integration (SOA, ESBs, microservices, APIs) demonstrates that each new integration paradigm promises to solve this and merely adds another layer. Agents will add yet another layer, not collapse existing ones. (3) The compute economics argument assumes a supply-side resolution (more data centers, cheaper inference) that faces real physical constraints: energy infrastructure, chip manufacturing capacity, and the thermodynamic limits of current architectures. Unlike the PC or cloud transitions, where marginal compute cost fell predictably via Moore's Law, LLM inference costs are tied to model size which is increasing, not decreasing. The Jevons paradox means cheaper tokens drive more usage, potentially keeping total costs elevated or growing. The net result: individual developers and small startups capture enormous productivity gains (the Anthropic growth marketer archetype), but the enterprise transformation that would justify current AI infrastructure investment valuations and drive the 100-1000x agent volume thesis does not materialize within the investment horizon that matters. The 'order of magnitude' bigger opportunity exists in theory but remains locked behind institutional, security, and economic barriers that take 10-15 years to resolve — by which time the current generation of AI companies will have either run out of capital or been acquired at distressed valuations, much like the first generation of cloud companies that were right about the future but wrong about the timing.

//

RISK 01

RISK 01

Enterprise Security and Agent Governance Paralysis Delays Adoption Indefinitely

Enterprise Security and Agent Governance Paralysis Delays Adoption Indefinitely

THESIS

The thesis assumes that AI agent diffusion into enterprise software stacks is inevitable and will eventually overcome institutional resistance. However, the conversation itself surfaces a devastating structural problem: agents cannot be treated as humans (they have no privacy rights, the principal bears full liability, prompt injection can exfiltrate context window contents, and there is no known solution to preventing information leakage). CFOs and CIOs at a recent event reacted with visceral hostility to the idea that agent-driven integration would become easier. If enterprises lock down systems — which is the rational response to unresolved security vulnerabilities — the flywheel of agent adoption stalls. The bull case requires agent volume to reach 100-1000x human users, but if the largest organizations (which control the majority of enterprise software spend) refuse to open their systems, the volume thesis never materializes at the scale required to force software stack transformation.

The thesis assumes that AI agent diffusion into enterprise software stacks is inevitable and will eventually overcome institutional resistance. However, the conversation itself surfaces a devastating structural problem: agents cannot be treated as humans (they have no privacy rights, the principal bears full liability, prompt injection can exfiltrate context window contents, and there is no known solution to preventing information leakage). CFOs and CIOs at a recent event reacted with visceral hostility to the idea that agent-driven integration would become easier. If enterprises lock down systems — which is the rational response to unresolved security vulnerabilities — the flywheel of agent adoption stalls. The bull case requires agent volume to reach 100-1000x human users, but if the largest organizations (which control the majority of enterprise software spend) refuse to open their systems, the volume thesis never materializes at the scale required to force software stack transformation.

DEFENSE

The speakers explicitly acknowledge this risk. They note that enterprises will 'close everything off' and that JP Morgan will be 'the slowest.' They frame it as a temporary lag, drawing analogies to open source governance debates and the capex-to-opex cloud transition. Their defense is that historical precedent shows these institutional barriers eventually fall. However, they provide no concrete mechanism or timeline for how the agent security problem (particularly prompt injection and context window exfiltration) gets solved, making this more of a faith-based defense than an engineering-grounded one.

The speakers explicitly acknowledge this risk. They note that enterprises will 'close everything off' and that JP Morgan will be 'the slowest.' They frame it as a temporary lag, drawing analogies to open source governance debates and the capex-to-opex cloud transition. Their defense is that historical precedent shows these institutional barriers eventually fall. However, they provide no concrete mechanism or timeline for how the agent security problem (particularly prompt injection and context window exfiltration) gets solved, making this more of a faith-based defense than an engineering-grounded one.

//

RISK 02

RISK 02

The Abstraction Layer Collapse Fallacy — Domain Complexity as a Permanent Moat

The Abstraction Layer Collapse Fallacy — Domain Complexity as a Permanent Moat

THESIS

The thesis posits that agents will eventually force legacy enterprise systems to rebuild for agent-native interfaces, and that software stacks will be selected based on agent accessibility. However, one speaker directly undermines this by noting that SAP's domain knowledge is not represented in a 'well-orchestrated data layer' — it is embedded in UI logic, middle tiers, organizational workflows, and decades of accumulated edge cases. The analogy to the spreadsheet adoption cycle is instructive but cuts both ways: the spreadsheet replaced the calculator because the abstraction was clean and self-contained. Enterprise systems like SAP, Workday, and legacy ERPs are not clean abstractions — they are deeply entangled organizational artifacts. If agents cannot penetrate these systems without generating fragmented shadow systems of record (which the speakers explicitly warn about), the result is not transformation but chaos. The 'vibe code your way to SAP' critique is never actually refuted.

The thesis posits that agents will eventually force legacy enterprise systems to rebuild for agent-native interfaces, and that software stacks will be selected based on agent accessibility. However, one speaker directly undermines this by noting that SAP's domain knowledge is not represented in a 'well-orchestrated data layer' — it is embedded in UI logic, middle tiers, organizational workflows, and decades of accumulated edge cases. The analogy to the spreadsheet adoption cycle is instructive but cuts both ways: the spreadsheet replaced the calculator because the abstraction was clean and self-contained. Enterprise systems like SAP, Workday, and legacy ERPs are not clean abstractions — they are deeply entangled organizational artifacts. If agents cannot penetrate these systems without generating fragmented shadow systems of record (which the speakers explicitly warn about), the result is not transformation but chaos. The 'vibe code your way to SAP' critique is never actually refuted.

DEFENSE

This is a genuine blind spot because while the speakers acknowledge that SAP 'isn't going anywhere' and that domain complexity is enormous, they then pivot to the long-term inevitability argument without addressing the intermediate failure mode. The risk is not that SAP survives forever unchanged — it is that the 10-15 year transition period creates massive value destruction through fragmented agent-generated shadow systems, broken integrations, and compliance failures. No speaker offers a concrete pathway for how agents penetrate deeply embedded domain logic that lives outside structured data layers. The conversation essentially hand-waves past this with 'enough generations later the agent might just say you need to rip out your legacy HR system,' which is an aspiration, not a mechanism.

This is a genuine blind spot because while the speakers acknowledge that SAP 'isn't going anywhere' and that domain complexity is enormous, they then pivot to the long-term inevitability argument without addressing the intermediate failure mode. The risk is not that SAP survives forever unchanged — it is that the 10-15 year transition period creates massive value destruction through fragmented agent-generated shadow systems, broken integrations, and compliance failures. No speaker offers a concrete pathway for how agents penetrate deeply embedded domain logic that lives outside structured data layers. The conversation essentially hand-waves past this with 'enough generations later the agent might just say you need to rip out your legacy HR system,' which is an aspiration, not a mechanism.

//

RISK 03

RISK 03

Engineering Compute Cost Spiral Destroys Unit Economics Before Scale Benefits Materialize

Engineering Compute Cost Spiral Destroys Unit Economics Before Scale Benefits Materialize

THESIS

The thesis assumes that token costs and compute budgets are a transient problem that will resolve through supply expansion, algorithmic breakthroughs, or a 'transistor moment.' However, the speakers themselves reveal that the current reality is engineers being blocked after three prompts on Claude Code Max, that the difference between compute being 2x engineering team cost versus 3% more is 'all your EPS,' and that no one currently knows how to budget for this. The historical analogies (vacuum tubes, MIPS pricing, cloud transition) all played out over decades, not quarters. If the token cost curve does not decline fast enough — or if usage scales faster than cost declines (Jevons paradox applied to AI compute) — then the companies that bet aggressively on agent-driven workflows will hemorrhage cash. The startups that 'burn through available capital pretending it's not a problem' will die, and the enterprises that froze will have been proven right in the medium term. This is not a theoretical risk: Anthropic's own pricing constraints are already creating friction that the speakers acknowledge in real time.

The thesis assumes that token costs and compute budgets are a transient problem that will resolve through supply expansion, algorithmic breakthroughs, or a 'transistor moment.' However, the speakers themselves reveal that the current reality is engineers being blocked after three prompts on Claude Code Max, that the difference between compute being 2x engineering team cost versus 3% more is 'all your EPS,' and that no one currently knows how to budget for this. The historical analogies (vacuum tubes, MIPS pricing, cloud transition) all played out over decades, not quarters. If the token cost curve does not decline fast enough — or if usage scales faster than cost declines (Jevons paradox applied to AI compute) — then the companies that bet aggressively on agent-driven workflows will hemorrhage cash. The startups that 'burn through available capital pretending it's not a problem' will die, and the enterprises that froze will have been proven right in the medium term. This is not a theoretical risk: Anthropic's own pricing constraints are already creating friction that the speakers acknowledge in real time.

DEFENSE

The speakers address this extensively through historical analogy, arguing that every prior compute paradigm (mainframes, PCs, cloud) faced identical cost concerns that were resolved by supply expansion and demand elasticity exceeding expectations. They invoke the Benioff math — if an enterprise salesperson costs $1M/year, their tooling budget can be substantial. They also argue that Wall Street's fixed-revenue-pie thinking systematically underestimates the opportunity by at least an order of magnitude. However, the defense relies entirely on pattern-matching to prior technology cycles without addressing the specific structural difference this time: the marginal cost of agent compute scales with usage in a way that prior tools (spreadsheets, cloud VMs) did not, because every agent interaction burns inference tokens with no equivalent of Moore's Law yet proven for LLM inference costs at the required rate.

The speakers address this extensively through historical analogy, arguing that every prior compute paradigm (mainframes, PCs, cloud) faced identical cost concerns that were resolved by supply expansion and demand elasticity exceeding expectations. They invoke the Benioff math — if an enterprise salesperson costs $1M/year, their tooling budget can be substantial. They also argue that Wall Street's fixed-revenue-pie thinking systematically underestimates the opportunity by at least an order of magnitude. However, the defense relies entirely on pattern-matching to prior technology cycles without addressing the specific structural difference this time: the marginal cost of agent compute scales with usage in a way that prior tools (spreadsheets, cloud VMs) did not, because every agent interaction burns inference tokens with no equivalent of Moore's Law yet proven for LLM inference costs at the required rate.

//

ASYMMETRIC SKEW

Downside is concentrated and near-term: enterprise adoption freezes due to unresolved agent security, compute costs spiral beyond budgets before supply catches up, and legacy system complexity creates a 5-10 year adoption lag that current valuations do not price in. Upside is diffuse and long-term: if all three barriers resolve simultaneously (security solved, compute costs drop dramatically, legacy systems open up), the opportunity is genuinely an order of magnitude larger than current models suggest. The skew is approximately 3:1 downside-weighted on a 3-5 year horizon, inverting to 1:3 upside-weighted on a 10+ year horizon. The critical asymmetry is that most capital is deployed on the shorter timeframe where downside dominates.

ALPHA

NOISE

The Consensus

The market believes AI will rapidly diffuse across enterprises, that agentic AI will quickly transform business operations, and that the economics of AI infrastructure (tokens, compute) represent a constrained, zero-sum cost problem that must be carefully budgeted against existing revenue models. Wall Street models treat the AI opportunity as a fixed revenue pie where existing SaaS economics are being disrupted and cannibalized, with cloud/AI spend requiring tight financial controls. The consensus also holds that agents will primarily interact through purpose-built interfaces (APIs, IDLs) and that the shift to agent-based software is primarily a marketing/interface problem.

The market's logic chain: AI agents will use software through well-designed APIs and interfaces → SaaS companies need to build agent-friendly interfaces to survive → the economics are governed by token costs and compute budgets → enterprises need strict budgeting controls → AI adoption is primarily a cost management and interface design challenge → existing SaaS revenue models are under pressure from agents that only want data access, not the full software stack.

SIGNAL

The Variant

The speakers collectively argue that AI diffusion into enterprises will take significantly longer than Silicon Valley expects, primarily because of organizational complexity, security/access control challenges, and the deep domain knowledge embedded in legacy systems that cannot be easily replicated or bypassed. Critically, the market is off by 'at least an order of magnitude' on the size of the opportunity — not the cost side, but the demand side. The speakers believe that when agents outnumber humans by 100x or 1000x, entirely new business models and consumption patterns will emerge that dwarf current revenue assumptions, just as cloud computing expanded server demand by orders of magnitude rather than merely shifting it. The current Wall Street fixation on token budgets and compute costs is a transient problem that will be solved by capacity expansion, algorithmic breakthroughs, or hardware changes — analogous to the vacuum tube-to-transistor transition or the MIPS pricing collapse at IBM.

The speakers' logic chain operates on several interconnected contrarian premises: (1) Agents don't actually choose tools based on interface quality — they choose based on semantic understanding of cost parameters, durability, and collective wisdom about platform quality, so the 'build something agents want' framing is about building better systems, not better APIs. (2) The abstraction layer always moves up, never collapses — every new technology (spreadsheets, open source, cloud) follows the same pattern where the job moves up a rung and the specialized skill that seemed essential gets commoditized, but new specialized skills emerge. (3) The real bottleneck is not technology but human organizational capacity: most people cannot think algorithmically, cannot create flowcharts of their own workflows, and the 'rocket scientist' requirement for orchestrating agents today will evaporate but the domain expertise requirement will intensify. (4) Enterprise adoption will be dramatically slower than startup adoption because of legitimate security concerns — agents cannot yet reliably contain information in context windows, prompt injection makes them easier to social engineer than humans, and you cannot treat agents as independent entities because they carry the liability of their principals. (5) The compute cost problem is transient and will resolve through the same mechanisms as every prior technology cycle — more supply, algorithmic efficiency, hardware breakthroughs — and the real economic question is not 'can we afford the tokens' but 'what is the tooling worth relative to the human it augments,' which makes the spend obviously justifiable. (6) Legacy systems like SAP encode domain knowledge not just in data layers but in UI, middle tiers, and usage patterns — 'vibe coding your way to SAP' is absurd — which means the diffusion of AI onto those data sources will be slow regardless of agent sophistication.

SOURCE OF THE EDGE

The speakers' edge draws from three distinct sources of varying credibility. First, Aaron Levie (Box CEO) has a genuine structural informational advantage as an operator building agent infrastructure in real-time — his observations about agents preferring computer use over code generation, the coordination challenges of thousands of agents hitting shared systems, and the specific security problems with context window information leakage come from direct product development experience, not theory. This is a credible edge. Second, the enterprise-experienced speaker (the skeptic/quibbler) brings deep institutional knowledge from decades of enterprise IT transformation — his analogies to spreadsheet adoption, open source governance debates, and the MBA cousin story reflect pattern-matching from lived experience across multiple technology cycles. His edge is strongest when arguing about organizational inertia and the abstraction layer thesis, though it occasionally veers into over-extrapolation from historical analogy. Third, the investor speaker claiming portfolio-wide asymptotic growth across 240 infrastructure companies provides a genuine data signal about current demand patterns, though the observation itself (infrastructure usage is spiking) is increasingly consensus rather than variant. The most credible and differentiated edge is the security/access control analysis — the specific insight that agents cannot be treated like humans because they have no privacy rights, their principals bear full liability, and context window information cannot be reliably secured against prompt injection. This is a practitioner observation that contradicts the naive 'just treat agents like employees' framework being widely circulated, and it comes from actually trying to build these systems. The weakest claimed edge is the historical analogy framework (vacuum tubes, MIPS, mainframes) — while directionally useful, it functions more as rhetorical scaffolding than as a genuine informational advantage. Anyone can draw these analogies; the question is whether this specific transition follows the same pattern, and the speakers acknowledge they don't have the answer.

//

CONVICTION DETECTED

• It's just absurd to think you're going to vibe code your way to like SAP • They're off by at least an order of magnitude on how big the opportunity is • I actually think that's almost exactly wrong • They are going to become the people who lead in the space • That's going to happen guaranteed • It's absolutely worth it • There's absolutely no doubt that this just goes away • 100% (used twice in agreement) • The rocket science part of it just is going to evaporate in very short order • It drives me absolutely bonkers that the Wall Street models have this fixed revenue pie • You have to be an absolute rocket scientist • SAP isn't going anywhere • Nobody is going to go in and because they're so terrified... is just crazy

//

HEDGE DETECTED

• I'm afraid of saying something that... • Maybe the agent gets better and better over time at being able to nudge you in that direction • I don't know that we know of a way to solve that at the moment • We don't yet know how to give it an M&A data room to fully securely • I'm a little skeptical of the until the whole like non-reproducible non-random element of this AI stuff goes away • Maybe there'll be a couple holdouts • I don't even have any amazing data points or anecdotes • We will have to know the answer (conceding uncertainty on compute budgets) • I think on the 10-year time frame • I actually probably believe it • I'm more in the like a cloud maximalist camp (hedging personal position) • It might just be more supply the way we think of it but it also might be an actual algorithmic fundamental change The ratio here is notable: the speakers display high conviction on directional outcomes (AI opportunity is massively underestimated, diffusion will be slow, legacy systems persist) while hedging significantly on timing and mechanism (how security gets solved, when compute costs normalize, what specific business models emerge). This pattern is consistent with genuine expertise rather than performed certainty — operators who have lived through multiple technology cycles tend to be very confident about structural dynamics while remaining humble about timing. The hedging on security and access control problems is particularly credible because it comes from the same speaker (Levie) who is most bullish on the opportunity, suggesting internal intellectual honesty rather than narrative construction. The one speaker who says 'guaranteed' about MIPS-style cost collapse immediately undercuts it with 'I actually probably believe it' — a tell that even the strongest conviction claims carry self-aware uncertainty. Weight the directional thesis heavily; discount any specific timeline claims.