'Lazy GTM Design Wearing An AI Costume' Is No Replacement For Expert Commercial Judgment
Poonam L, a GTM and revenue growth leader, discusses how AI can amplify lazy GTM practices that lead to poor engagement and advocates for marketers to use it to amplify their own expert judgment instead.

Marketing teams have never had more data. And yet, outbound quality is arguably worse than it was five years ago.
There's a disconnect here: access to data is easily mistaken for understanding of the customer. Marketers end up pulling superficial traits and trivia about prospects instead of using AI to synthesize signals and recognize patterns, and GTM programs end up optimizing for activity rather than any genuine buying context.
We spoke with Poonam L., a GTM and revenue growth leader with over 20 years of experience in B2B tech. She has influenced more than $500 million in pipeline and led cloud sales growth across the JAPAC region, with a strong emphasis on building partnerships and leading wide-ranging marketing campaigns. With that kind of experience, she's more than aware that the problems described above don't just lie at the feet of AI.
"The problem is a lazy GTM design wearing an AI costume," Poonam says. "Teams optimize for activity rather than impact, and buyers feel it instantly."
Corporate cosplay in an AI costume
Personalization at scale has never been easier, but dropping a job title into a subject line misses the context needed to earn attention. Scraping a LinkedIn page and mentioning a prospect's alma mater isn't enough, yet in many cases automation seems to bring out exactly that approach in otherwise talented GTM experts.
For Poonam, that misunderstanding points to a deeper issue with how automation is deployed. It can either support human judgment or override it, and if it overrides, sales teams will most likely find themselves scaling losing sales strategies. Used to support judgment, though, automation can help unlock profitable B2B growth by synthesizing internal account pressure before a single message is drafted.
As more agentic sales orchestration platforms enter the market, the real opportunity lies in using AI as a pattern-recognition engine. The real test, she argues, is whether automation makes the interaction more relevant, timely, and useful for the buyer. If not, she says, automation "just makes the seller more efficient at being ignored."
From decorative stalking to pattern recognition
Real buying context, Poonam explains, answers a different set of questions: What is changing in this account? What pressure is the buyer likely facing? Why does that matter now, and why does this buyer care?
The starting point is to build a framework for processing dense corporate signals to connect the dots in a meaningful way. Identifying the real outbound angle means uncovering leading indicators of internal account pressure, not just lagging indicators of intent. AI can help assemble those signals, but humans still have to interpret them in what's increasingly recognized as a human-AI hybrid model.
To identify this context, Poonam looks at signals such as hiring patterns and job descriptions. By using a modular AI stack to pull these threads together, teams can see where a company is investing, where it may be struggling to scale, and what might be changing internally. The output, in her view, should not read like a LinkedIn scrape. "It should be closer to: 'Your company seems to be expanding X while trying to reduce Y. That usually creates pressure on Z. Is that already on your radar?' That earns attention."
Linking those patterns to customer friction and expansion plans allows sellers to move from raw data to a business hypothesis about the specific friction a buyer may be facing today. Look closely at job descriptions, leadership changes, earnings calls, and customer reviews. Job postings often spell out transformation priorities and tools. Leadership changes can bring new mandates and budgets. Earnings commentary reveals what executives are publicly prioritizing, while customer reviews surface operational friction.
On their own, none of these guarantees intent. Together, they give a clearer view of where pressure is building and where a relevant outbound angle might exist.
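One way to picture that "together, not alone" logic is as a simple weighted aggregation of weak signals into a pressure score that gates whether a human should even form an outbound hypothesis. This is a minimal sketch; the signal names, weights, and threshold are illustrative assumptions, not a model Poonam prescribes:

```python
from dataclasses import dataclass, field

# Illustrative weights: no single signal proves intent, but they stack.
SIGNAL_WEIGHTS = {
    "hiring_surge": 0.3,        # many open roles in one function
    "leadership_change": 0.25,  # new exec often means new mandate and budget
    "earnings_priority": 0.25,  # executives publicly naming an initiative
    "review_friction": 0.2,     # customers reporting operational pain
}

@dataclass
class Account:
    name: str
    signals: set = field(default_factory=set)  # which signals fired

def pressure_score(account: Account) -> float:
    """Sum the weights of observed signals; a rough proxy, not intent."""
    return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in account.signals)

def outbound_hypothesis(account: Account, threshold: float = 0.5):
    """Return a draft hypothesis only when enough signals converge.
    A human still has to interpret and verify it before any outreach."""
    score = pressure_score(account)
    if score < threshold:
        return None  # not enough converging evidence; don't reach out yet
    return (f"{account.name} shows converging pressure signals "
            f"({', '.join(sorted(account.signals))}; score {score:.2f}). "
            "Form a hypothesis about the specific friction before drafting.")

acme = Account("Acme Corp", {"hiring_surge", "earnings_priority"})
print(outbound_hypothesis(acme))  # score 0.55 clears the threshold
```

The design choice mirrors the article's point: the machine only assembles and scores; the returned string is a prompt for human judgment, not a message to send.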
Rewriting the management conversation with grown-up metrics
None of this strategy matters if management is still just counting dials. Poonam notes that simply urging people to "be more strategic" tends to become corporate wallpaper. In her experience, meaningful changes tend to start in the management conversation. Instead of focusing reviews on volume, she suggests that managers push for the reasoning behind each action.
"Volume tells you whether people are busy. It does not tell you whether the motion is working," she says. "Instead of only asking how many emails went out, managers should ask what makes us believe this buyer should care. That one question changes the quality of thinking immediately."
Executing that approach also means revisiting what counts on the scoreboard. Research such as Salesforce's State of Sales shows that when success is defined mainly by emails sent or dials made, teams are more likely to reward brute-force motion than meaningful progress. Leaders looking to build a stronger data culture often find success by rebalancing what they track and how they review it.
She recommends prioritizing positive reply rates, meeting-to-opportunity conversion, account engagement depth, pipeline from target accounts, stage progression, and sales-accepted opportunities. The goal is to move the standard question from "How much activity did we generate?" to "What actually changed because of it?"
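To make the shift from activity counts to outcome ratios concrete, here is a small sketch with entirely hypothetical numbers and field names; only the ratios being reviewed come from Poonam's recommendation:

```python
# Hypothetical weekly numbers for one rep; the point is which ratios
# get reviewed, not the raw activity counts.
activity = {
    "emails_sent": 400,
    "positive_replies": 12,
    "meetings_booked": 6,
    "opportunities_created": 3,  # sales-accepted opportunities
}

# Volume metric: says the rep is busy, not that the motion works.
print("emails sent:", activity["emails_sent"])

# Outcome metrics to review instead.
positive_reply_rate = activity["positive_replies"] / activity["emails_sent"]
meeting_to_opp = activity["opportunities_created"] / activity["meetings_booked"]

print(f"positive reply rate: {positive_reply_rate:.1%}")           # 3.0%
print(f"meeting-to-opportunity conversion: {meeting_to_opp:.0%}")  # 50%
```

A rep can double `emails_sent` without moving either ratio, which is exactly the brute-force motion the review process should stop rewarding.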
As Poonam puts it, "Measure positive replies, quality conversations, opportunity conversion, and progression in target accounts. The question should shift from how much activity was generated to what actually changed as a result. That is the grown-up metric." To support that framework, she suggests defining "good outbound" in concrete terms: a relevant trigger, a likely business problem, a reason to engage now, a clear reason the buyer should care, and a low-friction next step.
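Her five-part definition of "good outbound" reads naturally as a pre-send checklist. A minimal sketch, with the field names as assumptions:

```python
# The five elements of "good outbound": a relevant trigger, a likely
# business problem, a reason to engage now, a clear reason the buyer
# should care, and a low-friction next step.
REQUIRED_FIELDS = [
    "relevant_trigger",
    "likely_business_problem",
    "reason_to_engage_now",
    "reason_buyer_cares",
    "low_friction_next_step",
]

def review_outreach(draft: dict) -> list:
    """Return the checklist items the draft is missing.
    An empty list means the hypothesis is complete enough to review;
    it does not mean the email is good, only that the thinking exists."""
    return [f for f in REQUIRED_FIELDS if not draft.get(f)]

draft = {
    "relevant_trigger": "Hiring surge in data engineering",
    "likely_business_problem": "Pipeline reliability at scale",
    "reason_to_engage_now": "New VP announced a platform rebuild",
    # "reason_buyer_cares" and "low_friction_next_step" not yet filled in
}
print(review_outreach(draft))  # ['reason_buyer_cares', 'low_friction_next_step']
```

Note that the checklist validates the hypothesis behind the email, not its wording, which is the review managers are urged to run.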
And most importantly, managers should review the hypothesis behind an outreach, not just the wording of the email itself. "Use AI for research and reasoning, not just copywriting," she says. "The email is the last mile. The thinking comes first."
The human advantage in 2026
Fast forward to the rest of 2026. When every revenue team has access to similar AI capabilities and data, success depends more on how those tools are applied than on which specific platform a team has chosen. Gartner's research on the future of sales suggests that the winning teams will be the ones reorganizing around buyer context rather than chasing tooling parity.
The advantage, Poonam argues, comes from sharper account selection, better timing, genuine business context, and "the one thing AI cannot replicate: commercial judgment." Machines handle the exhaustive research and pattern assembly, while humans decide where to focus and how to engage.
For teams willing to adopt that approach, the tools are already on the table. The distinction will be less about who has the most sophisticated stack and more about who consistently does the thinking before they press send. "Great outbound in 2026 will feel less like a campaign and more like a well-timed business conversation," Poonam concludes. "Honestly, that has always been the job. AI just removes the excuses."