Enterprise AI teams face a costly dilemma: build sophisticated agent systems that lock them into specific large language model (LLM) vendors, or constantly rewrite prompts and data pipelines as they switch between models. Financial technology giant Intuit has solved this problem with a breakthrough that could reshape how organizations approach multi-model AI architectures. 

Like many enterprises, Intuit has built generative AI-powered solutions using multiple LLMs. Over the last several years, Intuit's Generative AI Operating System (GenOS) platform has steadily advanced, providing advanced capabilities, such as the Intuit Assist assistant, to the company's developers and end users. The company has increasingly focused on agentic AI workflows that have had a measurable impact on users of Intuit's products, which include QuickBooks, Credit Karma and TurboTax.

Intuit is now expanding GenOS with a series of updates that aim to improve productivity and overall AI efficiency. The enhancements include an Agent Starter Kit that enabled 900 internal developers to build hundreds of AI agents within five weeks. The company is also debuting what it calls an “intelligent data cognition layer” that surpasses traditional retrieval-augmented generation approaches.

Perhaps even more impactful is that Intuit has solved one of enterprise AI's thorniest problems: how to build agent systems that work seamlessly across multiple LLMs without forcing developers to rewrite prompts for each model.

"The key problem is that when you write a prompt for one model, model A, then you tend to think about how model A is optimized, how it was built and what you need to do and when you need to switch to model B," Ashok Srivastava, chief data officer at Intuit, told VentureBeat. "The question is, do you have to rewrite it? And in the past, one would have to rewrite it."

How genetic algorithms eliminate vendor lock-in and reduce AI operational costs

Organizations have found multiple ways to use different LLMs in production. One approach is to use some form of LLM routing technology, which uses a smaller model to determine where to send a query.
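To make the contrast concrete, routing can be sketched as a lightweight classifier in front of several models. The keyword rules and model names below are invented stand-ins for the small LLM that would make this decision in a real deployment:

```python
# Minimal illustration of LLM routing: a cheap classifier decides which
# model serves each query. In practice a small LLM, not keyword rules,
# would make this call; model names here are hypothetical.
def route(query: str) -> str:
    q = query.lower()
    # Queries that look like they need multi-step reasoning go to the
    # larger, more expensive model.
    if any(k in q for k in ("prove", "derive", "step by step")):
        return "large-reasoning-model"
    return "small-fast-model"

print(route("Derive the closed-form solution"))  # large-reasoning-model
print(route("What time is it in Tokyo?"))        # small-fast-model
```

The trade-off the article highlights: routing picks the best model per query, while Intuit's service instead adapts one prompt to work on any model.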

Intuit’s prompt optimization service is taking a different approach. It’s not necessarily about finding the best model for a query but rather about optimizing a prompt for any number of different LLMs. The system uses genetic algorithms to create and test prompt variants automatically.

“The way the prompt translation service works is that it actually has genetic algorithms in its component, and those genetic algorithms actually create variants of the prompt and then do internal optimization,” Srivastava explained. “They start with a base set, they create a variant, they test the variant, if that variant is actually effective, then it says, I’m going to create that new base and then it continues to optimize.”
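Intuit has not published its implementation, but the loop Srivastava describes, create a variant, test it, and promote it to the new base if it scores better, can be sketched as a minimal mutate-and-select routine. The synonym table and the length-based fitness function below are toy stand-ins; a real service would score each variant by running it against the target model:

```python
import random

# Hypothetical word-level mutations standing in for whatever variant
# operators the actual service uses.
SYNONYMS = {
    "Summarize": ["Condense", "Recap"],
    "briefly": ["concisely", "in two sentences"],
}

def mutate(prompt: str) -> str:
    """Create a prompt variant by swapping one word for a synonym."""
    words = prompt.split()
    swappable = [i for i, w in enumerate(words) if w in SYNONYMS]
    if not swappable:
        return prompt
    i = random.choice(swappable)
    words[i] = random.choice(SYNONYMS[words[i]])
    return " ".join(words)

def score(prompt: str) -> float:
    """Toy fitness function: prefer prompts near 40 characters.
    In practice this would grade the target model's output."""
    return -abs(len(prompt) - 40)

def optimize(base: str, generations: int = 50) -> str:
    """Variant-and-select loop: a winning variant becomes the new base."""
    best, best_score = base, score(base)
    for _ in range(generations):
        variant = mutate(best)
        s = score(variant)
        if s > best_score:
            best, best_score = variant, s
    return best

print(optimize("Summarize the quarterly report briefly"))
```

A full genetic algorithm would maintain a population and recombine variants rather than track a single base, but the promote-the-winner dynamic is the same.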

This approach delivers immediate operational benefits beyond convenience. The system provides automatic failover capabilities for enterprises concerned about vendor lock-in or service reliability. 

“If you’re using a certain model, and for whatever reason that model goes down, we can translate it so that we can use a new model that might be actually operational,” Srivastava noted.

Beyond RAG: Intelligent data cognition for enterprise data

While prompt optimization solves the model portability challenge, Intuit’s engineers identified another critical bottleneck: the time and expertise required to integrate AI with complex enterprise data architectures. 

Intuit has developed what it calls an “intelligent data cognition layer” that tackles more sophisticated data integration challenges. The approach goes far beyond simple document retrieval and retrieval augmented generation (RAG).

For example, if an organization receives a data set from a third party with a schema it is largely unfamiliar with, the cognition layer can help. Srivastava noted that the layer understands both the original schema and the target schema, and how to map between them.

This capability addresses real-world enterprise scenarios where data comes from multiple sources with different structures. The system can automatically determine context that simple schema matching would miss.
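The mechanics of Intuit's layer are not public, but the schema-mapping task it performs can be illustrated with a small sketch. Here a static alias table stands in for the inferred mapping; the field names and the incoming record are invented for illustration:

```python
# Hypothetical sketch: translate a third-party record into a target
# schema via a field-name mapping. The cognition layer would infer this
# mapping automatically; here it is a hand-written alias table.
ALIASES = {
    "cust_nm": "customer_name",
    "txn_amt": "amount",
    "txn_dt": "transaction_date",
}

def map_record(record: dict, aliases: dict = ALIASES) -> dict:
    """Rename source fields to target-schema names, passing through
    any fields with no known mapping unchanged."""
    return {aliases.get(k, k): v for k, v in record.items()}

incoming = {"cust_nm": "Acme LLC", "txn_amt": 125.50, "txn_dt": "2024-07-01"}
print(map_record(incoming))
# {'customer_name': 'Acme LLC', 'amount': 125.5, 'transaction_date': '2024-07-01'}
```

The hard part, which this sketch elides, is producing the alias table itself from context, which is where the "cognition" in the layer's name comes in.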

Beyond gen AI, how Intuit’s ‘super model’ helps to improve forecasting and recommendations

The intelligent data cognition layer enables sophisticated data integration, but Intuit’s competitive advantage extends beyond generative AI to how it combines these capabilities with proven predictive analytics.

The company operates what it calls a “Super Model” – an ensemble system that combines multiple prediction models and deep learning approaches for forecasting, plus sophisticated recommendation engines.

Srivastava explained that the Super Model is a supervisory model that examines all of the underlying recommendation systems. It considers how well those recommendations have worked in experiments and in the field and, based on all of that data, takes an ensemble approach to making the final recommendation. This hybrid approach enables predictive capabilities that pure LLM-based systems cannot match.
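One simple way to realize the supervisory pattern Srivastava describes is to weight each underlying recommender by its observed historical performance and blend the scores. The model names, scores, and weights below are illustrative, not Intuit's:

```python
# Hedged sketch of a supervisory ensemble: each recommender's scores
# are weighted by how well it has performed historically, and the
# item with the highest blended score wins.
def ensemble_recommend(model_scores: dict, accuracies: dict) -> str:
    """model_scores: {model: {item: score}}; accuracies: {model: weight}."""
    totals: dict = {}
    for model, scores in model_scores.items():
        weight = accuracies.get(model, 0.0)
        for item, s in scores.items():
            totals[item] = totals.get(item, 0.0) + weight * s
    return max(totals, key=totals.get)

scores = {
    "collab_filter": {"loan_offer": 0.7, "card_offer": 0.4},
    "deep_ranker":   {"loan_offer": 0.3, "card_offer": 0.9},
}
# Weights from (hypothetical) experiment and field results.
weights = {"collab_filter": 0.9, "deep_ranker": 0.3}
print(ensemble_recommend(scores, weights))  # loan_offer
```

Note how the well-calibrated collaborative filter outvotes the deep ranker here: loan_offer scores 0.9×0.7 + 0.3×0.3 = 0.72 against card_offer's 0.63.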

The combination of agentic AI with predictions helps organizations look into the future and see what could happen with, for example, a cash flow-related issue. The agent could then suggest changes that can be made now, with the user's permission, to help prevent future problems.
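The predict-then-act pattern can be sketched in a few lines: project a cash balance forward, flag any month that dips negative, and surface a suggestion an agent could act on. All figures are invented for illustration:

```python
# Illustrative only: project a simple monthly cash balance and flag
# shortfalls an agent could act on before they occur.
def project_balance(start: float, inflows: list, outflows: list):
    """Return the final balance plus (month, balance) alerts for each
    month the projected balance goes negative."""
    balance, alerts = start, []
    for month, (cash_in, cash_out) in enumerate(zip(inflows, outflows), 1):
        balance += cash_in - cash_out
        if balance < 0:
            alerts.append((month, balance))
    return balance, alerts

final, alerts = project_balance(
    start=2_000.0,
    inflows=[8_000, 6_000, 4_000],
    outflows=[7_000, 12_000, 6_000],
)
if alerts:
    month, shortfall = alerts[0]
    print(f"Projected shortfall of {-shortfall:.0f} in month {month}; "
          "an agent could propose deferring discretionary spend now.")
```

The prediction here is a trivial running sum; in Intuit's description, the Super Model's ensemble forecasts would feed this step, and the agent would handle the suggestion and user-permission flow.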

Implications for enterprise AI strategy

Intuit’s approach offers several strategic lessons for enterprises looking to lead in AI adoption. 

First, investing in LLM-agnostic architectures from the beginning can provide significant operational flexibility and risk mitigation. The genetic algorithm approach to prompt optimization could be particularly valuable for enterprises operating across multiple cloud providers or those concerned about model availability.

Second, the emphasis on combining traditional AI capabilities with generative AI suggests that enterprises shouldn’t abandon existing prediction and recommendation systems when building agent architectures. Instead, they should look for ways to integrate these capabilities into more sophisticated reasoning systems.

For enterprises adopting AI later in the cycle, the bar for sophisticated agent implementations is rising. Organizations must think beyond simple chatbots or document retrieval systems to remain competitive, focusing instead on multi-agent architectures that can handle complex business workflows and predictive analytics.

The key takeaway for technical decision-makers is that successful enterprise AI implementations require sophisticated infrastructure investments, not just API calls to foundation models. Intuit’s GenOS demonstrates that competitive advantage comes from how well organizations can integrate AI capabilities with their existing data and business processes.
