It seems every company has generative artificial intelligence efforts underway. ChatGPT's arrival marked the first phase, when a few intrepid souls tried it out on their own and gained experience. For many organizations, the next phase was establishing policies and banning public AI systems. Now we've entered the third phase: large-scale internal AI trials. Businesses are setting up teams that manage and test AI, while everyone else is told to "sit tight" and wait for the official, sanctioned platform.
However, the majority of these AI pilot projects are not going well. Between 70% and 95% of corporate AI initiatives fail to provide discernible value, according to recent research from MIT.¹ But if you're familiar with IT history, the same was said of enterprise software projects in the 1990s. Back then, a large team of stakeholders would write a book of specifications, then developers would write code in a one-way stream until the final release. Sometimes it took too long, and the project was canceled.
Some insiders are speculating that we may be in an "AI Winter" – a period when we still believe AI has value, but nobody is investing anymore. Looking at the major investments in AI-supporting infrastructure, that's clearly not the case. Instead, we're entering an era of AI Skepticism, where business leaders are becoming more cautious about launching new, expensive initiatives. And we expect this skepticism to slow spending further if equity markets drop.
For voice service providers, generative AI promises to unlock value from an underused resource: the content of audio calls. AI-driven call transcription, with full regulatory compliance, can turn phone calls into sources of real-time insight, offering support during the call to improve productivity and follow-up information afterward in the form of call summaries.
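As a rough illustration of the pipeline this implies, here's a minimal Python sketch of a transcribe-then-summarize flow. Both service functions are hypothetical stand-ins, not a real API; a production system would plug in an actual speech-to-text engine and a language model with a summarization prompt.

```python
# Minimal sketch of a call-transcription-to-summary pipeline.
# Both service calls below are hypothetical placeholders: swap in
# your actual speech-to-text engine and LLM endpoint.

from dataclasses import dataclass


@dataclass
class CallSummary:
    call_id: str
    transcript: str
    summary: str


def transcribe_audio(audio_path: str) -> str:
    """Placeholder for a real speech-to-text service call."""
    return "Customer asked about porting two numbers to a new SIP trunk."


def summarize_transcript(transcript: str) -> str:
    """Placeholder for a real LLM call with a summarization prompt."""
    return "Follow up with porting paperwork for two numbers."


def process_call(call_id: str, audio_path: str) -> CallSummary:
    transcript = transcribe_audio(audio_path)   # speech -> text
    summary = summarize_transcript(transcript)  # text -> follow-up notes
    return CallSummary(call_id, transcript, summary)


if __name__ == "__main__":
    result = process_call("call-0001", "recordings/call-0001.wav")
    print(result.summary)
```

Keeping the two stages separate is deliberate: transcription and summarization can each be swapped out, audited, or kept inside compliant infrastructure independently.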
But to succeed, you'll need to avoid the risks that lead to AI project failure. Here's how.
Many companies are developing AI tools to sell, while relatively few are using them internally. Often, this is because the tools the service provider is trying to build and sell are for other people. For example, a service provider might offer an AI voice agent, but it's not quite sophisticated enough to use for their own workflows. Or a voice provider might offer call transcription to customers, but their own regulatory rules prevent them from using it.
In an October 2025 panel, Micaela Poirette of Comcast pointed to the irony of trying to build AI products without using them internally. She highlighted the need to use the tools we build, saying, "We're trying to sell AI, right? We want to enable businesses to use AI, but we need to use AI in order to develop AI solutions that can be customized and personalized."
The infrastructure for building private AI is expensive, and the skillset to deploy open-source foundation models is new. As a result, AI projects often start with a heavy focus on building a new technology stack. Unfortunately, this puts the focus on the tech rather than on solving business problems.
Some of the companies delivering Voice AI are so focused on the technology that they never connect it to the outcomes businesses actually need. For example, if you visit the main pages for many AI voice agents on the market today, you'll find source code but no specific business example. Their marketing ends up looking more like a GitHub project than a business enabler.
According to the MIT study, AI applications that are narrow and focused tend to have a higher success rate. Simply put, don't start your projects too broadly.
Finnbar Begley, an analyst at the Cavell Group, said, "One of the worst questions you can ask is, 'How do we get all of your data into a new system?' Ask which data is the most valuable for the use case they're trying to deploy and bring the minimum amount with you, and then integrate with just the right systems to make that system work."
How should this look for a voice service provider's AI project?
You can increase the chances of success by narrowing the project's scope. Think small and focused.
When you're working to bring a Voice AI product to market, you can either task in-house developers with building the service or acquire an existing AI product externally.
Many business leaders believe that any substantive AI tools should be built internally. But, according to the MIT research, external partnerships have twice the success rate of internal builds…or, to put it skeptically, internal builds fail twice as often as purchased AI products. One reason for these failures is the fragility of internally developed products: they're simply not as robust as those that have been widely tested. If you ask in-house developers to set up a web application, they'll likely have years of experience. But ask them to build a Retrieval-Augmented Generation (RAG) application for querying your trove of business data, and very few will have done it before.
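To make that concrete, here's a deliberately tiny sketch of the shape of a RAG pipeline. The keyword-overlap retriever below stands in for a real embedding model and vector store, and `generate_answer` stands in for the LLM call; none of these names refer to any particular product.

```python
# Toy RAG skeleton: retrieve the documents most relevant to a query,
# then hand them to a language model as context. A real system would
# replace the keyword-overlap scoring with an embedding model plus a
# vector store, and generate_answer with an actual LLM call.

DOCUMENTS = [
    "SIP trunk pricing: $20 per channel per month on an annual contract.",
    "Number porting takes 7-10 business days after LOA submission.",
    "E911 records must be updated within one business day of a move.",
]


def score(query: str, doc: str) -> int:
    """Crude relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]


def generate_answer(query: str, context: list[str]) -> str:
    """Placeholder for an LLM call that answers from the context."""
    bullets = "\n".join(f"- {c}" for c in context)
    return f"Q: {query}\nContext used:\n{bullets}"


if __name__ == "__main__":
    question = "How long does number porting take?"
    print(generate_answer(question, retrieve(question)))
```

Every placeholder in that sketch hides real engineering work (chunking, embedding quality, index freshness, prompt construction), which is exactly why teams without prior RAG experience tend to underestimate the build.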
The goal should be to identify the components available on the market and put them to work. Faster wins are coming from ready-to-use platforms that require minimal customization.
It's a mistake to keep AI knowledge in a box, but many organizations are setting up separate teams to handle AI rather than distributing the knowledge throughout the company. To cite computing history again, AI tools are more like a word processor than the mainframe in the basement. If you think of AI as something with its own agency, then you certainly need to keep it in a cage. But in truth, AI is just an ordinary technology with a cycle of innovation and adoption.
While sectioning AI off into a separate lab has been a popular approach, both the productivity and the appeal of that model have faded. Meta is winding down its "AI Superintelligence Lab," and Apple's "AI/ML" team is under fire for its slow approach to improving its products.
Ultimately, you want to get your people using AI tools effectively. Here's what the MIT authors had to say about successful AI deployments: "Many of the strongest enterprise deployments began with power users, employees who had already experimented with tools like ChatGPT or Claude for personal productivity. These 'prosumers' intuitively understood GenAI's capabilities and limits, and became early champions of internally sanctioned solutions. Rather than relying on a centralized AI function to identify use cases, successful organizations allowed budget holders and domain managers to surface problems, vet tools, and lead rollouts."
The bottom line here is that you should find ways for managers and individual contributors to put AI tools to work for themselves.
Whether you're developing AI-powered voice agents, transcription tools, or call summarization features, the end goal is to create real value for customers. But success depends on more than external rollout. You should also be using these tools internally to validate their effectiveness and accelerate development.
Voice service providers looking to move from experimentation to impact can turn to ECG for guidance. Our engineering experts help providers stay aligned with what works – and avoid the roadblocks that stall progress. Get started today.
Sources: