$30–40B Spent. Few Wins. Here’s the Fix.

Practical GTM plays that scale agent pilots into revenue + FREE giveaway for 3 readers.

The GenAI Divide — What It Is And Why It Matters For GTM

Quick note — I’m building three GTM workflows for free for the first qualified readers who reply “AGENT READY.” Details at the end.

Last week I argued agents are worth investing in — they’re delivering real productivity gains when used the right way. This week’s MIT Project NANDA report explains why the wins aren’t showing up everywhere.

MIT’s Project NANDA ran a multi-method review of AI in business: 300+ public AI initiatives, structured interviews with 52 organisations, and surveys of 153 senior leaders at industry conferences (Jan–Jun 2025).

The headline is blunt: lots of pilots, big budgets, tiny scaled wins. The GenAI Divide is real — a small group is moving experiments into production and measurable value, while most teams are stalled at the pilot stage.

This isn’t a model problem. It’s an operating problem. Agents amplify whatever you feed them:

  • bad data → bad advice;

  • brittle workflows → broken automation;

  • systems that don’t learn → repeat mistakes.

When that happens, users lose trust, pilots stall, and boards back off.

The report states:

Most pilots fail due to brittle workflows, lack of contextual learning, and misalignment with day-to-day operations.

In plain terms:

  • Brittle workflows = the automation follows a recipe that breaks if one ingredient is missing.

  • Lack of contextual learning = the system doesn’t learn from real outcomes, so it repeats the same mistakes.

  • Misalignment with day-to-day operations = the tool doesn’t match how people actually work, so they ignore it.

Before diving deeper into the GenAI Divide, here are the major AI moves from the past week:

  • Google rolls out big Gemini Live upgrades: visual guidance via your camera, deeper app integrations, and more natural speech; paired with Pixel 10 launches.

  • Anthropic bundles Claude Code into Team/Enterprise plans with new admin controls, continuing its enterprise push after last week's $1 US government access deal.

  • Meta continues internal AI reshuffles ahead of broader Llama roadmap.

  • Excel: beta “COPILOT” function to generate formulas from plain English.

Top Barriers To Scaling AI in The Enterprise

The report finds the same recurring problem: it's not the models that are failing, it's the systems around them. Teams trust flexible consumer tools like ChatGPT for quick tasks, but when businesses deploy custom tools into real workflows, those tools don't adapt, they don't retain context, and users stop using them.

In short, the top barriers are no adaptation, no memory, and, as a result, no adoption.

How The Best Builders Succeed

The teams that make it across the GenAI Divide don’t build bigger feature lists — they build systems that learn. Their pattern is consistent:

  • Start narrow, win big. Successful builders pick one tight, high-value workflow and solve it end-to-end before expanding.

  • Embed into real work. Winners' products don't live in a separate app — they appear in the tools reps already use and act where decisions are made (CRM, Slack, inbox).

  • Design for learning and memory. Top performers keep context, capture human corrections, and feed outcomes back into the system so the models get better with every interaction.

  • Customise to the workflow, not the UI. Domain fluency (knowing the sales motions, objections, and decision signals) beats a flashy interface every time.

  • Measure early & visibly. The best pilots turn into org-level priorities because they move a board-level metric quickly (meetings, pipeline velocity, CAC).

In short: winners treat agentic AI as an operating-system problem — small, repeatable wins stitched into the flow of work, with continuous feedback loops that make agents smarter and more trusted.

How To Make GTM Pilots Successful

If you want PEG (Profitable, Efficient Growth), focus your pilots where they move unit economics fast. Pick one play, one ICP, one owner, one KPI — run 30 days.

Play A — Enrich & Qualify (signal → fit)

Why: A clean top of funnel means reps only spend time on good leads.
KPI: % of enriched leads → qualified (vs baseline); time-to-qualify.
How: automatic enrichment → ICP scoring → only push high-confidence leads to reps; low-confidence → Human SDR validation.
30-day starter: 500 recent inbound leads; target >50% automation coverage; track time saved and qualified rate lift.
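To make the routing step concrete, here is a minimal Python sketch of Play A's logic: enrich each lead, score it against the ICP, and only push high-confidence leads to reps, sending the rest to a human SDR. The field names, the `enrich()` stub, and the 0.8 threshold are illustrative assumptions, not part of the report or any specific tool.

```python
# Hypothetical sketch of Play A: enrich -> ICP score -> confidence routing.
# Field names, enrich() behaviour, and the threshold are assumptions to tune.

ICP = {"industry": "saas", "min_employees": 50}
CONFIDENCE_THRESHOLD = 0.8  # assumption: calibrate against your baseline

def enrich(lead):
    # Stand-in for your enrichment provider (Clay or equivalent).
    lead.setdefault("industry", "unknown")
    lead.setdefault("employees", 0)
    return lead

def icp_score(lead):
    # Toy two-factor fit score in [0, 1].
    score = 0.0
    if lead["industry"] == ICP["industry"]:
        score += 0.5
    if lead["employees"] >= ICP["min_employees"]:
        score += 0.5
    return score

def route(leads):
    to_reps, to_sdr_review = [], []
    for lead in map(enrich, leads):
        if icp_score(lead) >= CONFIDENCE_THRESHOLD:
            to_reps.append(lead)        # high confidence -> straight to reps
        else:
            to_sdr_review.append(lead)  # low confidence -> human SDR validates
    return to_reps, to_sdr_review

reps, review = route([
    {"email": "a@acme.com", "industry": "saas", "employees": 120},
    {"email": "b@beta.io"},
])
print(len(reps), len(review))  # 1 1
```

The point of the threshold is the 30-day metric above: automation coverage is simply the share of leads that route without human review.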

Play B — Meeting Prep & Win Plan (context → conversion)

Why: Prepared reps close faster and make better use of every demo.
KPI: meeting → opportunity conversion lift.
How: one-page pre-call brief: 3 signals, 1-line value opener, 2 likely objections + suggested responses, next-best-action. Deliver to Slack/CRM before call.
30-day starter: enable for 3 AEs; A/B test prepped vs un-prepped meetings.
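The one-page brief in Play B can be treated as a small data structure that any agent fills in and renders into Slack or the CRM. This sketch is an assumption about shape only; the field names and the `to_slack` formatting are illustrative, not a real integration.

```python
# Hypothetical shape of Play B's pre-call brief: 3 signals, 1 opener,
# 2 objections with responses, and a next-best-action. Illustrative only.
from dataclasses import dataclass

@dataclass
class PreCallBrief:
    account: str
    signals: list          # 3 recent buying signals
    value_opener: str      # 1-line opener
    objections: dict       # likely objection -> suggested response
    next_best_action: str

    def to_slack(self):
        # Render as a plain-text message for Slack or a CRM note.
        lines = [f"*{self.account}* pre-call brief",
                 "Signals: " + "; ".join(self.signals),
                 f"Opener: {self.value_opener}"]
        for objection, response in self.objections.items():
            lines.append(f"Objection: {objection} -> {response}")
        lines.append(f"Next best action: {self.next_best_action}")
        return "\n".join(lines)

brief = PreCallBrief(
    account="Acme",
    signals=["Series B", "new VP Sales", "pricing page visits"],
    value_opener="Congrats on the Series B; here is how teams ramp reps faster.",
    objections={"We already have a tool": "Ask what it misses on signal freshness."},
    next_best_action="Book a technical deep-dive with the VP Sales",
)
print(brief.to_slack())
```

Keeping the brief as structured data (rather than free text) is what makes the A/B test measurable: every prepped meeting has the same five fields to compare against.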

Play C — Signal-Driven Outbound & Prioritisation (sales only)

Why: Turns noisy outbound into targeted outreach that converts.
KPI: meetings booked per outreach; touches per booked meeting (reduction).
How: agents watch funding, hires, product launches, intent page hits; score accounts; draft 1-sentence personalised opens; launch a light 3-touch sequence; AE approves first handoffs.
30-day starter: 3 signals, 100 accounts, measure reply→meeting rates.
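The prioritisation step in Play C boils down to weighting a few signals and ranking accounts. Below is a minimal sketch of that scoring, with signal names and weights as illustrative assumptions to replace with your own three signals.

```python
# Hypothetical sketch of Play C's account scoring: weight observed buying
# signals and surface the top accounts. Signal names/weights are assumptions.

SIGNAL_WEIGHTS = {"funding_round": 3, "key_hire": 2, "intent_page_hit": 1}

def account_score(signals):
    # signals: list of signal names observed for one account.
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def prioritise(accounts, top_n=100):
    # accounts: {account_name: [signals]} -> top_n names by score, zeros dropped.
    ranked = sorted(accounts.items(),
                    key=lambda kv: account_score(kv[1]),
                    reverse=True)
    return [name for name, sigs in ranked[:top_n] if account_score(sigs) > 0]

queue = prioritise({
    "Acme": ["funding_round", "key_hire"],
    "Beta": ["intent_page_hit"],
    "Gamma": [],
}, top_n=2)
print(queue)  # ['Acme', 'Beta']
```

The AE approval gate from the play then sits downstream of this queue: agents draft the one-sentence opens for the top accounts, and a human signs off before the 3-touch sequence fires.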

Simple rule: get these three pilots right before anything more complex.

Want Help Running This?

Cabot Insights will build one of the GTM workflows (Enrich & Qualify, Meeting Prep, or Signal-Driven Outbound) for FREE for the first 3 readers who reply with “AGENT READY.”


What you get: a scoping call, the Clay conditional flow (or equivalent orchestration) configured to your ICP, and a short handover so your reps can pilot immediately. Limited spots and T&Cs apply.

P.S. for those wondering: Cabot Insights blends autonomous AI agents with proven outbound playbooks, helping sales teams triple pipeline velocity, cut manual work, and scale smarter without adding headcount. Ready to 3× your pipeline without hiring more reps?

Stephen 
