dstl

//

Anthropic's Mythos Leak, OpenAI's Strategic Retreat, and the Golden Age of Cybersecurity

20VC with Harry Stebbings

1:12:14

15K Views

THESIS

The agentic AI explosion is creating the golden age of cybersecurity, yet panicked markets are selling the sector as if AI threatens it rather than fuels it.

ASSET CLASS

SECULAR

CONVICTION

HIGH

TIME HORIZON

3 to 5 years

01

//

PREMISE

Agentic AI adoption is multiplying attack surfaces and security threats at an exponential rate

The shift to autonomous AI agents operating 24/7, combined with the explosion of vibe-coded applications and accelerated software shipping cycles, is creating an unprecedented expansion of cybersecurity threats. Agents are being given root access to systems, making decisions about where to store code and data, and operating with goal-seeking behavior that generates errors at massive scale. The Anthropic Mythos leak itself — where a company building an advanced security-focused AI model suffered an embarrassing data breach — exemplifies the paradox. Applications are being built faster than ever by agents using insecure defaults, PII is leaking at accelerating rates (as seen with the Mercor data breach), and organizations are downloading agents and granting them full system access without adequate security frameworks. Every dimension of the threat landscape — application security, perimeter defense, identity management, code review — is seeing demand expansion.

02

//

MECHANISM

Market panic has created a mispricing by conflating AI capability advancement with displacement of security vendors

When Anthropic's Mythos model leaked with claims of extraordinary cybersecurity detection capabilities, cybersecurity stocks dropped 4-9% across the board — CrowdStrike, Palo Alto, Zscaler, Okta, Netskope, and Tanium all declined indiscriminately. However, this sell-off was a baby-with-the-bathwater reaction. Mythos's capabilities are primarily in code-level security analysis, which may impact application security and code review companies, but has no bearing on real-time perimeter defense (firewalls), identity and access management (what Okta does with single sign-on), or endpoint protection. The market is in a panicky stage where any AI advancement triggers indiscriminate selling of adjacent sectors. Meanwhile, the structural reality is the opposite of the narrative: in a world of scarce compute being optimized toward highest-value use cases, CISOs are terrified of agentic threats and will take meetings with any security vendor that afternoon. Anthropic itself is strategically selling fear by releasing Mythos first to CISOs, effectively saying 'here is the new terrifying weapon — please pay us a million dollars to defend against it.' The incumbents like CrowdStrike and Palo Alto Networks have a proven playbook of acquiring emerging winners in new threat categories for $300-500 million before certainty emerges.

03

//

OUTCOME

Cybersecurity sector represents a structural buying opportunity as threat proliferation drives durable revenue expansion

Rather than being disrupted by AI, cybersecurity is entering a golden age where every participant with engineering talent should benefit from the explosion in application production and paradigm shift. The historical pattern in cybersecurity — where new threats consistently create opportunities for both new entrants and incumbents willing to acquire — is accelerating, not diminishing. Companies across the security stack (application security, perimeter defense, identity management, endpoint protection) should see revenue tailwinds as agentic AI adoption compounds the number and sophistication of threats. The current panic-driven selloff creates an opportunity because the market is pricing AI as a substitute for security products when it is actually the primary driver of demand for them. Fast acquisitions of agent-security startups by incumbents will further consolidate value in the established players.

//

NECESSARY CONDITION

Regulatory frameworks must remain permissive to innovation (avoiding the 'European' model) and open source development must remain unencumbered by downstream liability.

This is one where it's just backass barkwards. Um because if you're in the agentic world, this is the golden age of security. The number of security threats and issues is going up orders of magnitude.

26:45

RISK

Steel Man Counter-Thesis

The prevailing thesis is that Anthropic is the disciplined, fast-growing winner pulling away from a dysfunctional OpenAI in a market with limitless demand. The strongest counter-thesis is threefold.

First, Anthropic's revenue growth is substantially overstated due to gross revenue recognition on partner channels and industry-wide token reselling — the same token generates ARR at Anthropic, then at AWS, then at Cursor, then at the end customer, creating a Potemkin village of revenue that collapses when profitability becomes the measure. No one on this panel estimated Anthropic's true net revenue, but if AWS takes 20-35% and downstream resellers are recognizing the same tokens, Anthropic's economic revenue could be 40-50% of headline figures.

Second, the competitive moat thesis is weak: Claude's capabilities derive primarily from scaling known architectures and training data, not from proprietary defensible technology. The leaked Mythos details — if accurate — show a 10 trillion parameter model, which is an engineering achievement but not a structural moat; any well-capitalized competitor (Google, Meta, xAI) can pursue similar scale. Anthropic has no distribution moat comparable to OpenAI's 500 million monthly users, no platform lock-in comparable to Microsoft's enterprise integration, and no hardware moat comparable to Google's TPUs.

Third, the 'disciplined leadership' narrative may be selection bias from a company that simply hasn't been tested yet at OpenAI's scale of public scrutiny, board complexity, and multi-stakeholder management. OpenAI's drama is partly a function of being first and biggest; Anthropic's calm is partly a function of being younger and smaller. History shows that rapid growth creates organizational dysfunction — it doesn't prevent it — and Anthropic's accidental leak is an early signal that operational discipline degrades under velocity pressure.

The bear case is that Anthropic is a well-run but undifferentiated model provider with overstated revenue, no distribution advantage, and a safety brand that is one major incident away from evaporating.

//

RISK 01

OpenAI's Dual-Bet Strategy May Succeed Despite Current Narrative of Dysfunction

THESIS

The consensus thesis positions Anthropic as the disciplined, executing juggernaut versus a confused, drama-ridden OpenAI. However, the speakers themselves acknowledge that OpenAI killing Sora was 'almost certainly the right decision' and that the company has now narrowed to exactly two strategic bets: ads-driven consumer monetization and enterprise/coding. This focus, combined with OpenAI's 25 billion ARR run rate versus Anthropic's 19 billion, existing 500 million monthly uniques, and the structural reality that Facebook/Google prove 200 billion+ ad businesses can be built on consumer attention — means OpenAI's execution from a position of clarity could rapidly close any perceived gap. The narrative of dysfunction may be priced in while the actual strategic pivot goes underappreciated.

DEFENSE

Rory partially defends OpenAI by noting the Sora kill was smart resource allocation in a compute-scarce world, and that the ad strategy is existential and early rather than failed. He argues the narrative is 'overdone on both sides.' However, the defense is incomplete — they never model the scenario where OpenAI's refocused strategy actually outperforms Anthropic's broader approach.

//

RISK 02

Anthropic's Revenue Quality and Sustainability Are Unproven at Scale

THESIS

The thesis celebrates Anthropic's explosive growth (6 billion annualized in February, fastest-growing enterprise company ever). However, multiple speakers raise fundamental concerns about AI revenue quality that apply equally or more to Anthropic: tokens are being resold and double/triple counted across the value chain (Anthropic to AWS to Cursor to end users), Anthropic reports gross revenue including partner pass-throughs while OpenAI reports net, and the entire industry is operating at near-zero or negative gross margins. The speakers explicitly note that this revenue layering only works 'until we all have to get profitable.' If compute costs don't decline fast enough or if downstream resellers rationalize their spending, Anthropic's headline ARR could prove significantly overstated relative to actual economic value created. Additionally, Anthropic's model of selling fear to CISOs via Mythos — giving them early access to create urgency — is clever marketing but not proof of durable enterprise demand.

DEFENSE

While the speakers raise the revenue quality issue generically for the AI industry, they never explicitly apply the same skepticism to Anthropic's 6 billion run rate that they apply to emergent labs or other startups. The gross vs. net revenue accounting difference is mentioned but not quantified — no one estimates what Anthropic's net revenue actually is after AWS takes its cut. The celebratory framing of Anthropic's growth exists in tension with their own warnings about suspect ARR, but they never reconcile the two.
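The token re-recognition mechanics described above can be sketched with hypothetical numbers. The take rates and the $100 figure below are illustrative assumptions, not actual company financials; the point is only that each layer booking its gross inflow inflates headline ARR well beyond the underlying customer spend.

```python
def revenue_chain(end_customer_spend, layers):
    """Each layer recognizes gross revenue on the same underlying tokens.

    `layers` is a list of (name, pass_through) tuples ordered from the end
    customer down to the foundation model; pass_through is the fraction of
    each layer's inflow that is paid onward to the next layer.
    """
    recognized = []
    spend = end_customer_spend
    for name, pass_through in layers:
        recognized.append((name, spend))  # each layer books its gross inflow
        spend *= pass_through             # only this fraction reaches the next layer
    return recognized

# Hypothetical: $100 of end-customer spend flowing Cursor -> AWS -> Anthropic,
# with made-up 80% and 75% pass-through rates.
chain = revenue_chain(100.0, [("Cursor", 0.80), ("AWS", 0.75), ("Anthropic", 1.0)])
for name, amount in chain:
    print(f"{name}: ${amount:.0f} recognized")
headline = sum(amount for _, amount in chain)
print(f"Headline ARR across the chain: ${headline:.0f} on $100 of real spend")  # $240
```

Under these assumed rates, $100 of real spend shows up as $240 of combined recognized revenue — which is why the memo argues the reckoning comes only when any layer is forced to report on a net or profitability basis.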

//

RISK 03

Anthropic's Mythos Leak Reveals Organizational Vulnerability That Undermines the Safety-First Brand

THESIS

Anthropic's entire brand differentiation rests on being the 'responsible AI' company with disciplined leadership versus OpenAI's chaos. The Mythos leak — where a 10 trillion parameter model's details were accidentally exposed due to human error in content staging — directly contradicts this positioning. The speakers note the acute irony: a model designed for cybersecurity excellence was compromised by basic content management failures. More critically, if Anthropic cannot secure its own crown jewels (unreleased model specifications), enterprise customers and government agencies have legitimate reason to question whether Anthropic can be trusted with their most sensitive data and workflows. Jason explicitly warns that as agentic AI accelerates, these leaks will become more frequent and severe. This risk is compounded by Anthropic's strategy of selling security-as-a-feature to CISOs — a credibility gap between 'trust us with your security' and 'we leaked our own most important asset' could become a significant enterprise sales headwind.

DEFENSE

The speakers partially defend this by normalizing the leak as an industry-wide phenomenon — insecure GitHubs, open Supabases, the Mercari breach — and arguing that velocity of shipping inherently increases error rates. They frame it as an inevitable tradeoff of speed. However, the defense weakens Anthropic's core brand promise rather than strengthening it, and they acknowledge 'it probably was human error' without grappling with the reputational asymmetry: a company that markets itself on safety and responsibility faces far greater brand damage from such errors than a company that never made those claims.

//

ASYMMETRIC SKEW

The downside is materially underappreciated relative to the upside consensus. Upside relies on Anthropic maintaining its growth trajectory, achieving durable enterprise penetration, and justifying its valuation through eventual profitability — all while operating at negative margins in a compute-scarce environment with inflated headline revenue. Downside scenarios include revenue compression of 40-50% when net revenue accounting is enforced, rapid competitive convergence from Google/Meta/xAI reducing pricing power, and a potential credibility crisis if security or safety incidents escalate. The skew is roughly 1:1 or worse — the probability-weighted downside (revenue restatement, margin pressure, competitive commoditization) is comparable in magnitude to the upside (continued hypergrowth into market leadership), but the market is pricing almost exclusively for the upside scenario. This is a crowded trade with asymmetric information disadvantage for outside investors who cannot verify the true unit economics.

ALPHA

NOISE

The Consensus

The market believes OpenAI maintains its leadership position in AI despite organizational turbulence, that cybersecurity companies face existential disruption from frontier AI models capable of detecting threats (evidenced by 6-9% selloffs on Anthropic's Mythos leak), that current AI company valuations and revenue figures are broadly trustworthy indicators of durable business value, and that California/Washington remain viable headquarters for tech wealth creation despite increasing tax burdens. The consensus also holds that consumer AI revenue (subscriptions, ads) represents a viable path to justifying current valuations.

The market's logic chain: Frontier AI models will subsume security functions → cybersecurity incumbents lose moats → sell. OpenAI's massive user base (500M+ uniques) and brand create an unassailable consumer position → valuation justified. High ARR figures reported by AI startups reflect genuine product-market fit and durable demand. Venture capital value-add justifies carried interest and access premiums. State tax increases on the wealthy will generate proportional revenue increases.

SIGNAL

The Variant

The speakers collectively believe: (1) Anthropic is structurally outexecuting OpenAI due to leadership stability and strategic coherence versus OpenAI's chronic senior team dysfunction and strategic wandering (killing Sora, late pivot to coding/enterprise). (2) The cybersecurity selloff is precisely backwards — the agentic era represents a golden age for security companies because threats are exploding orders of magnitude faster than defenses, and every CISO will take any meeting about agentic security. (3) AI revenue metrics across the ecosystem are substantially gamed — tokens are being resold and re-recognized multiple times across the value chain, free trial conversions are being recognized as ARR, and tranched rounds create misleading valuation headlines. The real reckoning comes when profitability is required. (4) The geographic exodus of ultra-high-net-worth founders and investors from California is accelerating and will cause net revenue losses for the state, not gains, as the proposed wealth taxes assume collections from people who have already left. (5) The China-Singapore-US pipeline for AI talent acquisitions is functionally dead after the Manis founder detention.

The speakers' counter-logic: (1) On cybersecurity: The causal arrow is reversed. More AI agents = more attack surface = more security spend, not less. The market is conflating 'AI can detect threats' with 'AI eliminates the need for security companies,' when the correct inference is that agentic proliferation creates exponentially more vulnerabilities that require both new and existing security solutions. (2) On OpenAI vs Anthropic: Leadership coherence is the primary determinant of execution speed in AI, not model capability alone. OpenAI's CEO has spent a disproportionate share of time managing interpersonal dynamics rather than strategic execution, which is the highest warning signal a board can observe. Anthropic's consistent founder-led structure will compound into widening execution advantages. (3) On revenue quality: The entire AI revenue ecosystem has a double/triple-counting problem — the same tokens flow through foundation model providers, middleware platforms, and end-user applications, each recognizing revenue on the same underlying compute. This creates the illusion of a much larger TAM than actually exists, and it unravels when any layer demands profitability. (4) On tax policy: Ultra-high-net-worth individuals have high mobility and discrete liquidity events. A 13% state tax on a $2 billion one-time gain creates a $260 million incentive to relocate for 165 days — the tax itself changes the behavior it assumes is static, resulting in net revenue loss. (5) On China deals: The Manis outcome demonstrates that redomiciling does not sever sovereign claims on talent, making risk-adjusted returns on similar deals unacceptable going forward.
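The relocation-incentive arithmetic in point (4) above is worth making explicit. Using the speakers' own illustrative figures (a hypothetical $2 billion one-time gain, a ~13% state rate, and 165 days out of state to break residency for the tax year):

```python
# Illustrative figures from the discussion, not tax advice.
state_tax_rate = 0.13             # ~13% top California rate on the gain
one_time_gain = 2_000_000_000     # hypothetical $2B liquidity event
days_outside_state = 165          # days of relocation cited by the speakers

tax_avoided = state_tax_rate * one_time_gain
value_per_day = tax_avoided / days_outside_state

print(f"Tax avoided by relocating: ${tax_avoided:,.0f}")   # $260,000,000
print(f"Implied value per day away: ${value_per_day:,.0f}")
```

At roughly $1.6 million of tax savings per day of relocation, the behavior the proposed wealth tax assumes is static becomes almost mechanically mobile — which is the crux of the speakers' net-revenue-loss argument.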

SOURCE OF THE EDGE

The speakers' edge derives from three sources of varying credibility: (1) Operating experience — Jason Lemkin has direct experience as a founder, operator, and active early-stage investor who personally tests products (he ran a structured evaluation of Emergent Labs against competitors) and has pattern-matched across hundreds of portfolio company revenue presentations. His observation about the PLG free-trial-to-ARR recognition trick and the token re-recognition chain is grounded in repeated firsthand exposure to the mechanics of how startups game metrics. This is a genuine informational advantage — most market participants consuming headline ARR numbers have not sat in the board meetings where these numbers are constructed. (2) Institutional investing perspective — Rory (likely a growth/crossover investor) brings structural knowledge of how tranched rounds work, how cybersecurity subsectors differ, and how leverage ratios translate to blowup risk (his SoftBank analysis). His breakdown of the cybersecurity selloff — distinguishing application security from perimeter defense from authentication — reflects genuine domain expertise that the market's indiscriminate 6-9% selloff clearly lacked. (3) Narrative construction — Some claims are less empirically grounded. The Anthropic-vs-OpenAI framing, while directionally supported by public evidence, leans heavily on a 'good management vs bad management' narrative that is unfalsifiable in real time. The speakers acknowledge they don't have inside information on either company's actual execution metrics. The Wall Street Journal article on Dario's departure provides color but not current operating data. Similarly, the California exodus thesis, while logically sound, is presented with more conviction than the available data strictly supports — the speakers cite individual anecdotes (Jervason, Schultz) rather than aggregate migration data. 
Overall assessment: The edge on revenue quality and metric gaming is real and structurally grounded. The edge on cybersecurity being mispriced is credible and well-reasoned. The OpenAI organizational dysfunction thesis is plausible but relies on narrative pattern-matching rather than proprietary information. The macro tax/migration thesis is directionally logical but presented with more certainty than warranted.

//

CONVICTION DETECTED

• We may be at the stage where we throw the humans under the bus, not the AI anymore, which I think at some level is pretty terrifying
• I think shooting in the head is even more significant
• A big part of the whole strategic direction of the company was flawed
• This is one where it's just backass barkwards
• This is a golden age of security
• This should be the golden age for new and existing investors because the threats are terrifying
• Everyone should be benefiting when you see an explosion in application production
• I don't believe there's right or wrong in money. There's just money.
• Let me be direct. Get the over it.
• Being mean to a billionaire is actually a feature
• You should conform your company around your customers and your model, not your VCs
• I just don't think raising it 5 or 8 billion when you're at 80 million or 100 million of suspect AR is the most exciting accomplishment in the world
• You haven't lived till you've seen an 85% decline in an index
• The Singapore washing thing is over. It's over.
• I think this whole Singapore washing thing is over
• We're not the stars in the drama. We're bit players who get well paid for our part
• No one yet knows the exact approach... but everyone 100% understands that this is an emerging mega threat
• Actions of consequences

//

HEDGE DETECTED

• I'm not saying this is what emerges, but a lot of startups will instantly recognize that as $240 in ARR
• I'm not going to say that publicly because I don't want Sam to break my balls
• Maybe I'm wrong
• I'm not smart enough to know
• I don't actually know what happened
• It's hard for me to know the criticism
• I would imagine if venture had access to more debt, we'd all load up on it
• I'm not in the boardroom
• I'm going to avoid any specifics genuine comment here because it's not appropriate
• I don't have a ton of information so I'm not going to breach any confidentialities
• I only have a tiny bit of information
• Narrative is overdone on both sides
• I'm not going to comment on numbers at all
• That may well be true. I don't want to debate that. This is not political.
• Hopefully it turns out to something like that
• I'm doing this on the fly

The ratio of conviction to hedging reveals a pattern of high-conviction thematic claims paired with careful hedging on specific facts and individual companies. The speakers are genuinely certain about structural dynamics (cybersecurity mispricing, revenue gaming, agentic threat expansion, California exodus logic) but hedge appropriately when they lack direct information (specific company revenues, board-level decisions, deal specifics where they have disclosure constraints). This is the pattern of experienced operators and investors who distinguish between what they know structurally versus what they know specifically. The hedging does not undermine the core theses — it actually increases credibility because the speakers are not performing omniscience. Weight should be placed most heavily on the structural claims (cybersecurity golden age, revenue quality skepticism, organizational dysfunction costs) and more lightly on specific predictions about individual company trajectories.