The AI Productivity Trap: Why Automation Is Making Revenue Teams Work More, Not Less

April 16, 2026

Fintech executive Safwan Zaheer explains how to avoid pipeline leakage by using AI to iterate on small, individual workflows before scaling automation across the revenue team.

Credit: The Revenue Wire

Key Points

  • Rather than accelerating growth, poorly governed AI agents often shift bottlenecks downstream, forcing sales teams to spend hours on manual quality assurance that steals time from high-value prospect interactions.

  • Fintech executive Safwan Zaheer says most organizations lack a clear ROI framework for AI, leaving them vulnerable to CFO scrutiny as token costs, infrastructure, and licensing fees continue to mount.

  • He advocates for adopting a solve-for-yourself approach to AI before attempting to scale agents across a department in order to build a more efficient, less costly sales machine.

As teams increasingly add AI tools and agents, the level of human involvement actually increases. You have to review outputs, assess quality, and decide if anything is usable.

Safwan Zaheer

Fintech Executive

AI was promised as the ultimate liberator of human time, but for many revenue teams, plugging in new automation tools has actually created a new bottleneck. Instead of clearing their plates, these systems often pile on more work by forcing sales and GTM teams into endless loops of checking and correcting outputs. For managers looking to build sustainable productivity, success now depends on setting boundaries, figuring out exactly which tasks to leave alone, and testing small, low-risk workflows before trying to rewire an entire revenue engine.

Safwan Zaheer has been watching executives fall into the AI productivity trap. For a living, he builds, transforms, and scales the platforms shaping the future of banking infrastructure. He believes that as organizations reach for agentic scale, they're ignoring the hidden costs of human oversight, essentially trading low-value manual tasks for higher-value, higher-stakes governance burdens.

"As teams increasingly add AI tools and agents, the level of human involvement actually increases. You have to review outputs, assess quality, and decide if anything is usable. It becomes an expanding problem," he says. He points out that even the best AI agents still require some human involvement. "As you scale the tooling, that involvement compounds." The dynamic is counterintuitive but increasingly common: a team deploys an AI agent to handle outreach or draft initial communications, expecting to free up capacity. Instead, the volume of output requiring human validation grows, and so does the time spent reviewing, correcting, and routing that output. In this way, automation moves the bottleneck downstream into quality assurance and decision-making, where the stakes are higher and the cost of errors is real.

  • The wrong question: Zaheer argues that most leaders are starting from the wrong premise entirely. The default question of "What can we automate?" assumes that more automation is inherently better. When your SDRs spend 30% of their day reviewing AI-generated outreach for brand safety or compliance, however, they're actively bleeding pipeline. "The right question is what you choose not to automate," he asserts.

  • The compliance cost: He notes that the risk is particularly acute for revenue teams operating in regulated industries, where a missed flag or an unchecked output can create legal exposure. "When things get over-complicated, small mistakes become costly. You miss a compliance trigger, or you lose an opportunity because no one was paying attention; everything was just running on autopilot." Even in unregulated environments, the cost of low-quality automation adds up in pipeline leakage, damaged prospect relationships, and the internal overhead of managing systems that were supposed to manage themselves.

Zaheer's deliberately unglamorous alternative is to start small, start with yourself, and prove the value before scaling to anyone else. "If you're a business leader and you're using an automation tool, the first question is whether you're able to successfully automate the tasks you don't want to do, like scanning email, drafting initial responses, and updating CRMs," he explains. "If you're using that time to do more meaningful work like talking to customers or supporting your team, that's a success."

  • The discipline of scale: The practical path he outlines for revenue leaders is narrow by design. The goal of automating predictable, repetitive workflows is to build confidence incrementally, reaching a point where the human checks feel unnecessary because the system has been validated through sustained use, not because someone decided to skip them. "You want to get so comfortable and confident with it that over time, you don't even feel the need to be involved in the reviews. That takes iteration. You can't get there by deploying ten agents on day one," Zaheer says.

  • Muddled measurement: Even for leaders who adopt AI with discipline, a persistent tension is that no one has cleanly solved how to measure the ROI. Zaheer sees CFOs beginning to ask harder questions as token costs, infrastructure spend, and licensing fees hit the bottom line, and most teams don't have clear answers. "CFOs are now asking leaders, 'How is this all being tracked? How is this being justified?' I don't think anyone has fully solved it yet. As the technology gets more democratized and costs level out, we'll know more. But right now, that's an element most teams are missing."

  • Activity ≠ outcome: The temptation in the absence of clear measurement is to point to activity: more emails sent, more leads touched, more content produced. Zaheer warns against using output volume as a proxy for impact. "Some leaders look at whether it's helping them move a metric, like market share or profitability or customer acquisition, but that's the wrong way to look at AI." In his view, the more honest initial measure is whether the technology is saving time and whether that time is being redirected to higher-value work.

For leaders who still believe in the scale-first approach, the market is already producing warnings. Zaheer points to Klarna, which slashed its customer service teams in favor of AI automation and then had to reverse course and rehire. He cites Block's announcement of sweeping AI-driven headcount reductions as another example of the pattern. "These activities do more harm than help to both the technology itself and to the ecosystem," Zaheer says. "Using AI as a label to cover up organizational decisions, especially ones that impact human capital, is damaging. And in several cases, the companies that moved fastest to replace people with AI are the ones walking it back." The lesson is that treating AI as a blanket replacement rather than a targeted complement produces exactly the kind of operational, financial, and reputational instability that revenue leaders are supposed to prevent. Zaheer says the teams that get it right will resist the pressure to do everything at once, automating the small, repetitive tasks first and validating before they scale. "Solve for yourself before you solve for your company, your team, or your customer," he concludes.