Last week my friend texted me: “holy sh*t, we need to run and hide in the woods. The AI agents are taking over.” That text pretty much captures the fear-mongering side of the current agent hype cycle surrounding Moltbook + OpenClaw/Clawdbot, outlined here by AI expert Allie Miller.

Putting aside the Terminator doom and gloom, let’s focus on the impact to enterprise sales. Two weeks ago, Sequoia put forward their PLG → Agent-Led Growth thesis. The idea is seductive: instead of selling to humans, you sell to agents (or agents sell to each other). Agents discover, evaluate, choose, and deploy software autonomously. The best product wins. Distribution collapses into technical merit alone.

This article dissects that thesis: what’s right, what’s wrong, and what to do about it.

PART 1: Agent-Led Growth is Here

In late 2022, I was a founding GTM hire at Metronome (now part of Stripe). We were triaging inbound like any early-stage company, and I made a habit of asking a simple question: “How did you hear about us?” More and more, the answer was the same: “ChatGPT.” It was an early signal that product discovery was changing before our eyes.

There’s a growing belief that agents will autonomously choose, buy, and deploy enterprise software. That will be true in some worlds: solopreneurs, early-stage startups, high-velocity teams, and low-risk tools. Jason Lemkin’s episode “We replaced our sales team with 20 AI agents” on Lenny’s Podcast is a great example.  

Agents are incredible at the parts of the funnel humans are bad at. They can scan documentation. Compare features. Read reviews (which, admittedly, carry human bias). Sanity-check APIs. Cross-reference pricing pages. They can do in minutes what would take a human weeks - and do it without fatigue or bias toward shiny, PLG-optimized landing pages and Easter eggs.

There’s something refreshing about this. In early 2025, I’d ask developers in post-POC surveys why they chose Cursor over Windsurf, and they’d say, “I like the design” or “it feels cleaner.” In many cases, what they were really saying was, “I use this in my personal life because my friends told me about it, and since I’m used to it, I like it better at work.” As an AE, this was incredibly frustrating. But it was an important lesson in the power of PLG virality (side note: Windsurf got a major redesign in 2H25, and our self-serve growth numbers are skyrocketing).

Agents will shortlist vendors faster and with more datapoints than any human ever could - if your docs are vague, your APIs are messy, or your pricing is impossible to parse, you simply don’t make the list. 

PART 2: Where Agent-Led Growth Fails 

I’ve seen this narrative before, in 2014, when I was working in the ETF industry. Betterment, Wealthfront, and robo-advisors were going to kill Financial Advisors. Twelve years later, in 2026, the opposite has happened - human Advisors are thriving. The millennials who poured billions into robo-advisors in their 20s are now in their 30s and 40s, in higher tax brackets with more complex situations, and they want hands-on support. Humans at large enterprises will behave similarly - we can’t help ourselves.

Cutting-edge AI players are betting on this. Anthropic, for example, is aggressively investing in GTM: as of 12/31/25, GTM headcount is up 84% over the past six months, and GTM job openings have increased 161% over the past three months. A week ago, they hired half of Salesforce’s GTM leadership team.

Some GTM naysayers believe Anthropic is taking a Mercor approach (see Time Magazine’s feature): studying their GTM team’s behavior to find signals to automate with AI. And yes, they are absolutely doing this.

But here’s where the Sequoia thesis breaks, and why Anthropic is betting on enterprise sales:

  • Agents don’t own risk, budget, or board-level accountability

  • Enterprise buying is still political, emotional, and risk-weighted

  • Agents can parse docs but can’t feel 3AM outages

  • Enterprise tech stacks are messy, and enterprise technology rarely works out of the box without some customization to org requirements (hence the rise of the “Deployed Engineer”)

  • Agents don’t get “fired” or have to explain a bad decision to their partner and kids (despite whatever Moltbook threads you read last night)

I saw this play out recently at Cognition. I was working with a Fortune 500 enterprise that had built internal coding agents. The buying team vetoed an external evaluation because of politics: Devin competed with their internal project, which they had spent months promoting across thousands of developers.

There’s also a quieter assumption embedded in the agent-led growth narrative: LLMs will objectively recommend the best tools. 

The AI labs are already running ads, and this feels a lot like Google in the early days, when “organic” search was pure for about five minutes. As AI Lab losses compound and pressure to monetize profitably increases, recommendations will get gamed. 

And if you live in the SF tech echo chamber, a slice representing well under 0.01% of the global workforce, it’s easy to mistake local velocity for global reality.

Part 3: What this Means for You

#1: You need to learn how to use agents in GTM (obviously). GTMBA will have more hands-on content in the coming weeks. I also recommend following Brendan Short’s “The Signal,” as he writes a lot about this topic.

#2: Product must be legible to machines and defensible to humans. Agents demand clean docs, structured data, and honest APIs, and humans need conviction, proof, and a story they’re willing to defend when things break.
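To make “legible to machines” concrete, here is a minimal sketch of how a buying agent might shortlist vendors from structured product metadata. The schema and field names are hypothetical (not from any real agent framework), and a real agent would crawl docs and pricing pages rather than read inlined data - the point is simply that unparseable pricing or a missing API spec gets you filtered out before any human ever sees the shortlist.

```python
# Hypothetical vendor metadata an agent might assemble from docs,
# pricing pages, and published API specs. Field names are illustrative.
VENDORS = [
    {"name": "VendorA", "docs_coverage": 0.95,
     "pricing": {"model": "per-seat", "usd_month": 40}, "api_spec": "openapi-3.1"},
    {"name": "VendorB", "docs_coverage": 0.60,
     "pricing": None, "api_spec": None},  # pricing impossible to parse
    {"name": "VendorC", "docs_coverage": 0.88,
     "pricing": {"model": "usage", "usd_per_1k_calls": 2.5}, "api_spec": "openapi-3.0"},
]

def shortlist(vendors, min_docs=0.8):
    """Keep only vendors an agent can actually evaluate: parseable
    pricing, a published API spec, and docs above a coverage threshold."""
    qualified = [
        v for v in vendors
        if v["pricing"] is not None and v["api_spec"] and v["docs_coverage"] >= min_docs
    ]
    # Rank by documentation completeness - a crude proxy for legibility.
    return sorted(qualified, key=lambda v: v["docs_coverage"], reverse=True)

print([v["name"] for v in shortlist(VENDORS)])  # VendorB never makes the list
```

The gate is binary: no amount of brand warmth rescues a vendor whose pricing the agent cannot parse.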

#3: Partnerships and product bundles matter more. One of my new accounts for 2026 is a massive GCP shop. Google’s AI products will be “good enough” and bundled, so to break through, we’ll need to leverage trust through distribution partners.

All said, I don’t agree with Harry Stebbings’ recent hot take.

But he does have a point: the best-distributed agent wins.

Cheers,

Julian @ GTMBA

Opinions do not reflect those of Cognition and are solely my own.  
