How's that chatbot working out for you?
Thoughts on the great AI theater and why nothing is changing
Picture a packed conference room at corporate headquarters. The executives nod in rhythm as a slick demo of the new AI pilot project plays on screen. This is the big innovation everyone’s been buzzing about. There’s even a code name and a glossy slide deck. The presentation ends with polite applause. Teams congratulate each other. And then… nothing really happens. The applause fades, the demo is shelved, and employees go back to business as usual. In retrospect, the whole affair starts to feel like an elaborate stage play: innovation theater at its finest.
Live in the Boardroom: Companies across industries have become adept at looking innovative without actually changing much. They launch artificial intelligence pilots with great fanfare, only to watch these projects stall out and quietly disappear. It’s not just one or two flops, either. The vast majority of corporate AI pilot programs never deliver any meaningful impact. A recent MIT report found that a whopping 95% of enterprise AI pilots never translate into significant gains for the business. In other words, almost all these prototype projects fail to scale into anything real. Executives love to tout them in press releases and town halls, but as many weary employees can attest, most of these pilots are full of sound and fury, signifying nothing. The demos stay demos. The “revolutionary” chatbot or machine learning tool remains confined to a slide deck or a sandbox, far away from the company’s actual operations or customers.
Stuck on the Runway
Why do so many AI initiatives get stuck in pilot purgatory? By now, industry veterans can practically recite the pattern by heart. A mix of organizational dynamics and human psychology turns ambitious pilots into mere performance pieces. Some of the common culprits are:
Lack of Follow-Through: Leadership’s enthusiasm for the AI pilot evaporates soon after the demo day. No one champions the project into production. It’s as if the company was more interested in saying they tried AI than actually using it. The pilot team delivers a proof-of-concept, but then there’s no mandate, budget, or urgency to implement it at scale. Without executive follow-through, even a promising pilot will wither on the vine.
No One Owns It (Absence of Role Clarity): The pilot was developed in a kind of organizational no-man’s land. An R&D group or an innovation lab ran the experiment, but it’s unclear who should take charge of it afterward. Operations assume IT will handle it; IT assumes the business unit will integrate it. In the end, nobody truly “owns” the outcome. When roles and responsibilities aren’t defined, the pilot becomes an orphan – talked about, but never adopted.
Fear of Failure: In many company cultures, failure is the unforgivable sin. So everyone plays it safe. The pilot stays small and superficial to avoid any visible flop, or it gets quietly buried at the first sign of trouble. Scaling it up would mean putting real budget and reputation on the line, and that’s scary in organizations where careers can live or die by quarterly results. It’s safer to keep the AI project in perpetual pilot mode because if it disappoints, one can always say “well, it was just an experiment” and move on.
Unmeasured Outcomes: Paradoxically, a lot of pilots launch without clear success metrics. There’s excitement about the technology, but no one decides how they will know if it’s working. Vague goals like “learn about AI” or vanity metrics like number of pilot users replace concrete business KPIs. With no hard evidence of benefit, there’s no case to expand the project. An unmeasured pilot is easy for skeptics to write off because nobody can point to a tangible win (or even a clear failure that could be learned from).
These issues make AI pilots performative rather than practical because organizations often gravitate to flashy tech experiments that generate good press, all while neglecting the boring foundational work that actually solves problems. It’s more thrilling to deploy a trendy AI chatbot or sensor network (something that can be shown off in a headline or conference) than to fix the mundane process or data issues plaguing day-to-day operations. The result is a veneer of innovation over an unchanged core. Leadership gets to check an “innovation” box and boast about transformation, but nothing really transforms. The pilot becomes a showpiece, a shiny object that distracts from the fact that fundamental issues remain unsolved. And employees, especially those who’ve been through rounds of layoffs and reorgs, see right through it. They know an “AI initiative” that isn’t backed by genuine commitment is just management playing make-believe.
Case Study: The Epic Faceplant of Articulate Rise 360’s AI Assistant
Articulate built its reputation as the easy button for e-learning. Rise made course authoring feel like a clean version of PowerPoint without the bloat. That simplicity drew a crowd. Over time, the habit hardened. When the market shifted to AI, the company reached for the same playbook: wrap a complex idea in a thin interface, ship fast, let customers sort out the gaps.
The AI Assistant in Rise arrived with a big promise: “Create course content nine times faster.” The demo sparkled, and Articulate looked poised to become the next big thing in EdTech, again.
Then the drafts landed.
On first read, the text looked tidy. On a second pass, it thinned out. Paragraphs drifted. Claims lacked sources. Instructors grabbed red pens and never put them down. Minutes saved on typing turned into hours of fact checks, restructuring, and heavy edits to make something a learner could trust.
Wait, the AI Assistant doesn’t use Bloom’s? Wow, okay…so why don’t I just use ChatGPT?
The miss traces back to leadership and talent. Articulate is still run by its original founders, and the senior engineering leader came out of Microsoft 20 years ago. Strong pedigree for shipping software, no doubt. Building a friendlier slide tool takes sharp product instincts. Building an intelligent assistant for learning asks for a different bench. You need people who live and breathe cognitive load, scaffolding, transfer, practice design, and assessment. You need product managers who’ve put AI into real workflows, who know where to place guardrails and how to measure learning instead of word count. More importantly, you need product leaders who know how to lead those PMs.
That depth didn’t show up in the Assistant.
Reviewers saw the cracks right away. The feature offered little guidance on where it helps and where it harms. No frameworks. No patterns shaped by seasoned instructional designers. Customers were handed a promise and asked to take it on faith. Many tried, and many walked away.
This looks like a bench that never evolved. The company won by simplifying and zooming in on a specialized “version” of what Microsoft offered, a sharp move in 2002.
20+ years later, the frontier now asks for AI that fits live workflows, respects pedagogy, handles provenance, and gets better with feedback from real use. That takes new leaders and fresh expertise. Articulate kept swinging the old bat, and the result was a loss of trust and a lot of wasted time and money.
Leaders who hold the reins without adding new voices tend to reuse moves that once worked, and Articulate’s AI Assistant episode fits that pattern. The original products were slapped together for a specific audience (EdTech), but were really just a more flowery version of PowerPoint. This time, the company tried the same trick and somehow created a less functional version of ChatGPT. The result is a loss of confidence and a very public flop.
I promise, the next big claim from Articulate will face a much colder room after the launch of their AI Assistant.
Teams forgive an early stumble when they see humility and the right experts stepping in. What drains patience is a pattern of shallow launches that lack user centricity. Training orgs aren’t asking for a toy bolted onto an authoring tool. They want a partner that understands learning as a craft and uses AI to serve that craft. Until Articulate brings in people who can do that, every AI release risks becoming another shiny PowerPoint slide that goes nowhere.
“Articulate’s AI Assistant has some powerful features, but it’s not a magic bullet. It can speed up parts of your workflow, but it won’t replace the critical thinking and learning expertise that go into creating great experiences. If you’re willing to experiment and treat the AI as a collaborative partner (not the final word), this robust new tool is likely to increase your productivity and spark creativity.”
Translation: Just use ChatGPT for a fraction of the price.
Leadership, Culture, and the Trust Deficit
What’s really going on here? In case after case, perfectly good algorithms and tools never make a difference because of human factors like leadership, culture, and trust. An MIT initiative studying AI in business put it bluntly: the models and tech are powerful enough; the problem is adoption. In other words, the bottleneck is us. How our leaders lead (or don’t), how our organizations handle change (or don’t), and whether our people trust the folks in charge; these determine if an AI pilot ever leaves the lab.
Consider the environment in many companies today. Perhaps management championed an AI pilot publicly, but never truly invested in the messy follow-up work of integration, training, and process change. That’s a leadership failure: a lack of ownership combined with a lack of courage to see things through. Or maybe the pilot was launched by one executive, but six months later that exec is gone or priorities have shifted. New leadership doesn’t want to continue the predecessor’s project, so it dies on the vine.
We’ve all seen how internal politics and ego can derail good ideas.
A culture of constant leadership churn or inconsistency will doom longer-term initiatives like AI adoption; employees learn to just wait out the fad of the month.
Then there’s the trust (or lack thereof) within the organization. Imagine you’re an employee who just watched a wave of layoffs sweep out colleagues, all while hearing the CEO extol a shiny new AI project. Are you going to feel excited and secure about helping implement that AI, or are you wondering if it’s going to be used as an excuse to cut more jobs? In many companies, workers have grown cynical about “innovations” that seem disconnected from reality on the ground. When leaders push performative projects without addressing core issues (or while ignoring employee morale), they hemorrhage trust. And without trust, no AI rollout stands a chance. People won’t give honest feedback about the pilot’s problems; they might even passively resist adoption, figuring it will be gone soon anyway. As one commentary on the public sector noted, it’s easier for officials to chase flashy tech for headlines than to do unglamorous work, leading to a “veneer of innovation” while basic services rot underneath. The corporate world has its own version of this: big talk about AI and digital transformation, but little willingness to invest in the unsexy fundamentals of data cleanup, employee training, and process overhaul. It’s leadership theater instead of leadership action.
Ultimately, these stalled AI pilots say far more about management than about any algorithm. They reveal a leadership and change-management breakdown. Leaders who truly want AI to succeed need to do more than sponsor a cool demo. They have to rally the organization around it, set clear accountability, allow for failures and learning, and stick with it beyond the initial hype. They need to foster a culture where new tech isn’t just a gimmick to get applause, but a tool people are empowered (and prepared) to use. And critically, they must rebuild internal trust by aligning projects with real problems and being honest about goals. When an AI initiative is actually about solving a painful problem, not just about looking innovative, it tends to get the necessary buy-in. When people at all levels feel included and safe to experiment, pilots have a chance to turn into real products. Without those human factors lined up, even the best AI tech will remain an academic exercise.
The End of the Charade?
Industry veterans have seen enough of these theatrics to last a lifetime. Many can swap war stories of big ideas that never left the pilot stage – sometimes the same idea, recycled every few years by a new leadership team that didn’t heed history. It’s no wonder there’s deep skepticism on the front lines. Performative innovation has a cost: it breeds cynicism and fatigue. Each successive “game-changing” pilot announcement is met with more eye-rolls and fewer volunteers, because people hate wasting their time on a charade. For those of us who have lived through wave after wave of empty promises (and the layoffs that often follow when those promises don’t pan out), the message to leaders is clear: it’s time to either commit to real change or stop the act.
If there’s a silver lining, it’s that a handful of organizations do get it right and they tend to be the ones who focus on genuine business pain points, take small but concrete steps, and empower their teams to carry projects over the finish line. Those are the rare 5% of pilots that actually scale and produce results. They succeed not because of some magic in the code, but because leadership and culture set them up for success. The truth is, AI can absolutely transform a business, but only when human beings in that business do the hard work of transformation.
There’s something darkly comic about watching companies repeat this pilot theater over and over. It’s like watching a play where the ending never changes. The audience already knows the lines by heart, yet the actors keep performing as if it’s the first time. Eventually, you have to laugh – or cry – at the absurdity of it. So here’s hoping the next time your company proposes an AI pilot, they actually mean it. No more free trial grandstanding. No more half-hearted science projects. Let’s see some follow-through. Otherwise, we’ll be doomed to keep re-running this same old show, stuck in an endless dress rehearsal for a premiere that never happens. And frankly, we’re ready to close the curtain on that act.
Sources:
MIT study on AI profits rattles tech investors — Madison Mills, Axios (August 21, 2025). A report from MIT’s NANDA initiative found that 95% of enterprise AI pilot projects delivered zero financial return, validating fears that most corporate AI investments aren’t paying off. This research, which analyzed 300 public AI initiatives, underscores the huge gap between AI hype and business impact. axios.com
MIT Says 95% Of Enterprise AI Fails — Here’s What The 5% Are Doing Right — Jaime Catmull, Forbes (August 22, 2025). This Forbes piece breaks down MIT’s findings that the vast majority of AI pilots stall out. It explains that successful adopters narrow their focus to high-impact use cases and partner externally, whereas most companies chase trendy chatbot projects that never scale. The article offers strategic advice for business leaders to avoid “pilot purgatory” and actually realize AI’s value. forbes.com
How to break the ‘AI hype cycle’ and make good AI decisions for your organization — Brian Eastwood, MIT Sloan (July 21, 2025). MIT Sloan recounts how many firms fall into “AI theater,” rushing into flashy pilots out of FOMO and ending up with failures. Akamai’s CTO notes a recurring pattern of “AI success, theater, FOMO, and failure,” and shares lessons on fostering true AI fluency (e.g. choosing the right tools, empowering employees) instead of performative experiments. mitsloan.mit.edu
Why Most AI Pilots Never Reach Production — Camille Manso, InformationWeek (July 22, 2025). An industry analysis revealing that over 88% of AI proof-of-concepts never make it to full deployment. It cites poor pilot design — treating scaling as an afterthought, misaligned stakeholders, and lack of trust in AI outputs — as key reasons so many projects stagnate. The article outlines how organizations can build the foundations for scale during the pilot (data readiness, governance, change management) to avoid perpetual experimentation. informationweek.com
Los Angeles Unified’s AI Meltdown: 5 Ways Districts Can Avoid the Same Mistakes — Alyson Klein, Education Week (July 8, 2024). A cautionary case study from K-12 education: LAUSD launched a “game-changing” AI chatbot named Ed, only to shut it down five months later amid chaos. The piece details how an overhyped, poorly defined pilot (built by a startup that later imploded) became a “cautionary tale.” Key takeaways include defining clear problems for AI to solve, vetting vendors, setting realistic timelines, and protecting data. edweek.org
Articulate Rise 360 AI: Our Review of the AI Assistant — Maestro Learning (June 2025). A detailed critique of Articulate’s attempt to bolt an AI content assistant onto its e-learning platform. While the feature promised to create course material “9× faster,” reviewers found the generative results “lack depth,” produce disjointed text, and require extensive fact-checking and editing. By prioritizing speed over substance, Articulate’s AI assistant became an example of performative innovation that delivers more hype than real help to users. (The review notes the tool’s bright spots, but overall signals that without real instructional design intelligence, the AI pilot falls flat). maestrolearning.com