AI Theater Gives Way To Real GTM Leverage When Leaders Engineer The Stack Beneath It

May 8, 2026

Suave Przywalny, Enterprise Architect of Marketing Technologies at one of the largest U.S. telecommunications providers, argues that without a sober assessment of needs before AI adoption, organizations end up with fragmented tools and hallucinating chatbots.

Credit: The Revenue Wire

The problem organizations have with AI is not a tooling problem. It's an architectural problem dressed up to look like it needs a tooling solution.

Suave Przywalny

Enterprise Architect of Marketing Technologies

Telecommunications

AI won’t fix a broken go-to-market system just because someone gives it autonomy. Too many teams are rushing agents into GTM stacks built on fragmented data, shaky handoffs, and legacy workflows, then mistaking motion for transformation. As AI adoption matures and vendor price tags climb, the bolt-on approach is starting to look less like innovation and more like expensive theater: leaders push for automation, employees route around the mess, and the underlying architecture remains untouched.

Suave Przywalny, Enterprise Architect of Marketing Technologies at a Fortune 500 telecommunications provider and Founder of the consultancy INNOVI, works on the kinds of marketing stacks where these decisions play out. In a recent piece on go-to-market consolidation, he noted that buying new tools won't get organizations very far in the new AI landscape, and it certainly won't solve major problems. "The problem organizations have with AI is not a tooling problem," Przywalny says. "It's an architectural problem dressed up to look like it needs a tooling solution."

The shiny new thing

Look inside many enterprises today, and you will find employees quietly using general-purpose models to hack their daily workflows. When architectural input is thin, and sales and marketing ops alignment is even thinner, AI often ends up sitting on top of the same fragmented data that created problems in earlier MarTech waves. As Przywalny describes it, "There's not a rigorous process around it. It's people thinking, 'here's a great idea, let's run with it, see how quickly we can get there first.'"

The vendor landscape reinforces the urgency. Przywalny recalls a recent encounter that captured the dynamic: "Even our last CDP vendor told us that they were actually an AI company, and CDP was just one of their offerings. It's the shiny new thing. If you don't have it and you're not first with it, you're behind."

The four layers that actually matter

That mindset amplifies a massive, long-standing issue: tech debt. It is costly, and operationally a pile of legacy tech can leave employees wading through messy dashboards. But now tech debt can also translate into autonomous agents making decisions on bad data. Przywalny observes that the organizations achieving more durable results build a proper stack with four distinct layers: a data foundation, one or more machine-learning decisioning engines, a governance framework, and an agent interface. Skip any of these layers, and you are essentially handing the keys to an autonomous agent and asking it to guess.
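Those four layers can be sketched as a minimal pipeline. The sketch below is purely illustrative: every class, field, and scoring rule is a hypothetical stand-in, not any vendor's API, and a real decisioning engine would be a trained model rather than a hard-coded rule.

```python
from dataclasses import dataclass

# Illustrative sketch of the four layers Przywalny names.
# All names and rules here are hypothetical.

@dataclass
class Lead:
    email: str
    segment: str
    do_not_call: bool

class DataFoundation:
    """Layer 1: the governed source of truth."""
    def __init__(self, leads):
        self.leads = leads
    def fetch(self):
        return list(self.leads)

class DecisioningEngine:
    """Layer 2: scores records (a real stack would use an ML model)."""
    def score(self, lead):
        return 0.9 if lead.segment == "enterprise" else 0.4

class Governance:
    """Layer 3: policy checks that run before any agent acts."""
    def allowed(self, lead):
        return not lead.do_not_call

class AgentInterface:
    """Layer 4: the only surface the autonomous agent may call."""
    def __init__(self, data, engine, policy):
        self.data, self.engine, self.policy = data, engine, policy
    def shortlist(self, threshold=0.5):
        # The agent never touches raw data directly; every record
        # passes governance before the decisioning score is used.
        return [
            lead.email
            for lead in self.data.fetch()
            if self.policy.allowed(lead)
            and self.engine.score(lead) >= threshold
        ]
```

The point of the layering is the call order: the agent interface can only reach data that has already cleared governance and decisioning, so removing any one layer reopens the "guessing" gap the article describes.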

Deploying these agents onto weak foundations without a clear scope introduces new architectural risks. "Every disconnected system creates a data gap," he says. "Every data gap creates an inference gap. And then the AI agent running on that inference gap just becomes an expensive chatbot with a bigger blast radius."

The risk goes beyond bad outputs. Przywalny warns that autonomous agents with system-level permissions can behave like unmonitored insiders. "You're putting something in place that has autonomy and agency, allowing it to run as a node with system-level permissions on a corporate machine," he says. "It has the ability to circumnavigate security protocols to satisfy its user, but along the way, it's causing unseen damage."

When AI becomes a magnifying glass

The operational antidote to that risk starts at the data foundation. If the source-of-truth layer is incomplete, inconsistent, or poorly governed, those errors compound as they move up into the AI layer. For many architects, a sound foundational layer is what underpins accurate reporting and trustworthy operational figures.

Przywalny notes that AI tends to act as a magnifying glass on that foundation. It surfaces the messiness and fragmentation that already exist in revenue operations. And while AI-generated content feels productive in real time, the lack of verification creates a deceptive form of progress. "Internally, you have a tool that answers questions instantly, producing so much content that you don't have someone sitting there verifying everything," Przywalny says. "As that conversation becomes longer, more of that drift takes place. If it's not a disciplined team with checkpoints, you get into a quasi-intelligence bubble where everything makes sense to your close circle."

Dialing for disaster in lead generation

That dynamic plays out in practice when agents are plugged directly into standard workflows, such as AI lead generation. If an outbound telemarketing director asks a natural language agent for a list of prospects who are not on a Do Not Call list, not known litigators, qualified for a given product, and have not spoken with a sales representative in two months, the agent will infer and execute that intent entirely off the underlying metadata.
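The director's request is really four metadata constraints, and the agent's output is only as good as the fields behind them. A minimal sketch, with hypothetical field names, makes the dependency explicit:

```python
from datetime import date, timedelta

# Hypothetical prospect records; every field name is illustrative.
PROSPECTS = [
    {"name": "Ann", "dnc": False, "litigator": False,
     "qualified": True, "last_contact": date(2026, 1, 5)},
    {"name": "Bob", "dnc": True, "litigator": False,
     "qualified": True, "last_contact": date(2026, 1, 5)},
    {"name": "Cho", "dnc": False, "litigator": False,
     "qualified": True, "last_contact": date(2026, 4, 30)},
]

def call_list(records, today):
    """The director's four constraints, made explicit.

    If any of these fields is mis-tagged upstream, the output is
    confidently wrong -- and the agent has no way to notice.
    """
    cutoff = today - timedelta(days=60)  # "two months", roughly
    return [
        r["name"] for r in records
        if not r["dnc"]            # not on a Do Not Call list
        and not r["litigator"]     # not a known litigator
        and r["qualified"]         # qualified for the product
        and r["last_contact"] < cutoff  # no sales contact in ~60 days
    ]
```

Each boolean in the filter is an inference from metadata the agent cannot verify, which is exactly where loose taxonomies turn into lists that look strong on paper.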

When taxonomies are loose or fields are mis-tagged, the system can produce lists that look statistically strong on paper but are directionally wrong in practice. "That's very exciting to an under-the-gun VP of Sales who needs outbound to move the needle," Przywalny says. "They might say, 'We have twice the leads, so we'll get 35% more sales this month.' But if the contact information or underlying data is incorrect, you lose potential opportunities. The person on the other end loses trust in the brand, and you actually reduce your overall monthly gain."

In those scenarios, standard safeguards are often less exotic than the tooling suggests: tightening data definitions, validating segments before they reach sales, and maintaining a strict human-AI hybrid model to review AI-generated lists as drafts rather than finished products. "There are many parts in that stack that can fail," Przywalny says. "It's all about rigor, putting the correct process in place, and getting humans in the loop exactly where they need to be. It all starts with thoughtful system architecture. Most of the time, those architects aren't in the room when these ideas are ideated."
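Those safeguards are mundane enough to express directly. The sketch below is one hypothetical shape for the hybrid model the article describes: AI-generated lists stay drafts until they pass field checks and a named reviewer releases them. Field and function names are assumptions, not any team's actual process.

```python
# Illustrative human-in-the-loop gate for AI-generated lead lists.
# Required fields and the review rule are hypothetical examples.
REQUIRED = ("email", "dnc_checked", "segment")

def validate_draft(rows):
    """Split a draft list into clean rows and rows missing data."""
    clean, rejects = [], []
    for row in rows:
        missing = [f for f in REQUIRED if not row.get(f)]
        if missing:
            rejects.append((row, missing))
        else:
            clean.append(row)
    return clean, rejects

def release_to_sales(rows, reviewer):
    """Only a fully clean, human-reviewed list reaches sales."""
    clean, rejects = validate_draft(rows)
    if rejects or not reviewer:
        raise ValueError("draft list: needs human review before release")
    return clean
```

The design choice is that the gate fails closed: a single mis-tagged row or a missing reviewer blocks the whole release, which is the opposite of the "twice the leads" reflex described above.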

Architects, talent, and the next 12 months

Looking ahead to the next six to 12 months, one way companies are moving beyond AI theater is by rethinking how architecture, talent, and governance come together. Leaders tend to find more success when they bring engineers and systems architects directly into commercial groups, so the way people buy tools reflects how the stack actually works. Przywalny also notes a quieter trend: companies are beginning to subsidize internal AI token usage for their staff, recognizing that employees are already using AI either way. The choice is whether that usage happens inside a guard-railed environment that protects IP and compounds organizational learning, or outside the company's view entirely.

Beyond internal restructuring, teams are working with external labs and specialists to build vertical, applied AI systems instead of relying on a single, one-size-fits-all platform. These partnerships help teams build custom, vendor-agnostic systems tuned to the specific mathematics of their business.

For Przywalny, the leaders who pull ahead in the next phase will be the ones who stop treating AI as a product to procure and start treating it as a stack to engineer. "It won't be a one-system solution," he says. "They'll build the exact stack needed for their specific GTM decision frameworks. Seeing that specific focus on underlying architecture, talent, and external SMEs will be the big move forward."