After a year of heavy experimentation and inflated expectations, 2024 will be the year enterprises move generative AI from pilots and prototypes into full-scale production systems. This shift from exploration to implementation will force enterprises to tackle thorny challenges around delivering ROI, managing costs, and ensuring responsible AI practices.
At AI21, we foresaw these implementation challenges on the horizon. Rather than get caught up in the hype, we've been diligently building the solutions enterprises need to successfully deploy AI across their organizations. Our suite of practical, optimized models gives enterprises the foundation to drive business value from AI, and our task-specific models ensure new capabilities translate quickly into real-world impact.
The hype around generative AI's potential has collided with the practical realities of deploying these systems at scale. As our leadership predicts below, some companies may opt for open-source models and in-house development, while many will seek turnkey solutions that target specific business needs right out of the box. While we’ve seen successful use cases emerge, taking them into production requires overcoming hurdles with unit economics, compliance, privacy, and security.
Overall, 2024 will be a pivotal year as enterprises shift their focus from generative AI experimentation to practical implementation. There will be growing pains, but those who navigate them successfully will gain a real competitive advantage.
Here are our predictions for the upcoming year:
In 2024, enterprises will increasingly prioritize production-grade AI solutions, with a focus on business value and total cost of ownership. The focus will shift from flashy demos to reliable solutions in production. Robust evaluation frameworks and built-in verification mechanisms will raise overall quality and reduce hallucinations.
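To illustrate the idea of a built-in verification mechanism in its simplest possible form, here is a toy groundedness check that flags answers not supported by a source document. Real evaluation frameworks are far more sophisticated; every function and threshold below is a hypothetical sketch, not a reference to any specific product.

```python
# Toy verification mechanism: accept a generated answer only if enough of
# its content words appear in the source text it should be grounded in.
STOPWORDS = {"the", "a", "an", "is", "are", "in", "on", "of", "to", "and"}

def grounded_ratio(answer, source):
    """Fraction of the answer's content words found in the source."""
    source_words = set(source.lower().split())
    answer_words = [w for w in answer.lower().split() if w not in STOPWORDS]
    if not answer_words:
        return 1.0
    hits = sum(1 for w in answer_words if w in source_words)
    return hits / len(answer_words)

def verify(answer, source, threshold=0.6):
    """Accept the answer only if it is sufficiently supported by the source."""
    return grounded_ratio(answer, source) >= threshold

source = "the contract renews annually and can be cancelled with 60 days notice"
print(verify("the contract renews annually", source))
print(verify("the contract includes unlimited free upgrades", source))
```

In a production system a check like this would sit between the model and the user, routing unsupported answers to a fallback rather than returning them.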
LLMs are necessary but not sufficient: on their own, they will never be as reliable as enterprises demand without a system built around them. 2024 will be the year enterprises build and adopt AI systems that go beyond standalone LLMs, combining them with retrieval (RAG), tools, and other components.
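As a rough illustration of what "a system around the LLM" means, here is a minimal retrieval-augmented sketch: documents are ranked against the query and the top matches are injected into the prompt so the model answers from grounded context. The keyword-overlap retriever and prompt template are illustrative placeholders, not any particular vendor's API.

```python
# Minimal RAG sketch: retrieve relevant documents, then build a grounded prompt.

def score(query, doc):
    """Naive relevance score: number of words shared by query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, corpus, k=2):
    """Return the top-k documents by keyword overlap with the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, corpus):
    """Ground the eventual LLM call in retrieved context."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Shipping is free on orders over $50.",
    "Support hours are 9am to 5pm on weekdays.",
]
prompt = build_prompt("What is the refund policy for returns?", corpus)
print(prompt)
```

In a real deployment the retriever would be a vector index over enterprise data and the prompt would be sent to an LLM, but the shape of the system, retrieve then generate, is the same.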
We'll see systems that are capable of continuous learning, reasoning, decision-making and handling complex tasks. These agent systems will be deeply embedded within our workflows and will empower the workforce in surprising ways.
2023 was the year of exploration and evaluation; 2024 will be the year of deployment for AI in enterprises. This year, companies have crafted their budgets with AI in mind. For many, this will look like supercharging the productivity of their knowledge workers with reading and writing assistants, and selecting an important line-of-business application where they want to tailor a language model to their corporate knowledge base, brand, tone, and manner. This could be a key customer-facing application in sales, marketing, or customer service. What makes 2024 different is that these technologies will leave the realm of trials and pilots and become part of the everyday workflow for employees and customers.
In 2024, companies will make important decisions about the role AI will play in their business strategies. Specifically, they will consider whether to be a consumer of AI technology, a supplier of AI to other companies, or both. Most companies have developed and maintain proprietary data and best practices that are material to their strategy, and many are interested in training language models on these assets to create scale and efficiency internally. There is also an opportunity to open a new business line by offering these customized language models as SaaS products in the marketplace.
For example, a financial services firm that has developed leading market analysis capabilities and a supporting knowledge base can choose to train a language model with those abilities and use it as an internal tool for its financial advisors. In addition, it could offer the customized model as a SaaS service to other companies in the financial services industry, becoming a supplier of AI services. This opens a new high-margin, scalable business line as a technology provider. These companies can build significant competitive differentiation by being early to market, because accuracy and utility grow geometrically with LLM usage, data, and feedback.
Enterprises will become increasingly selective about which POCs to conduct, demanding a clear line of sight to business value before committing money and resources, rather than acting on gut feel alone.
Enterprises will start to think much more actively about production use cases (vs. POCs), and with that shift they will dig deep into the Total Cost of Ownership (TCO) of LLMs in production. By TCO, I mean not just the cost paid to the LLM provider, but also the resources needed to deploy and maintain the efficacy of the LLMs in production.
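To make the TCO point concrete, here is a back-of-the-envelope sketch that adds people and infrastructure costs to the raw API bill. Every figure below is a hypothetical assumption for illustration, not real pricing or benchmark data.

```python
# Illustrative monthly TCO for an LLM in production: API spend plus the
# engineering and infrastructure needed to keep it working well.

def monthly_tco(requests_per_day, tokens_per_request, price_per_1k_tokens,
                engineers, engineer_monthly_cost, infra_monthly_cost):
    """Sum provider charges with maintenance and infrastructure costs."""
    api_cost = (requests_per_day * 30 * tokens_per_request / 1000
                * price_per_1k_tokens)
    maintenance_cost = engineers * engineer_monthly_cost
    return api_cost + maintenance_cost + infra_monthly_cost

total = monthly_tco(
    requests_per_day=10_000,    # assumed traffic
    tokens_per_request=1_500,   # prompt + completion, assumed
    price_per_1k_tokens=0.01,   # hypothetical provider rate
    engineers=2,                # ongoing evaluation and monitoring work
    engineer_monthly_cost=15_000,
    infra_monthly_cost=2_000,
)
print(f"${total:,.0f}/month")
```

With these made-up numbers, the API bill is a minority of the total: the people maintaining evaluation, monitoring, and integrations dominate the cost, which is exactly why TCO matters more than the per-token price.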
With an estimated investment of $2-3 billion, 2023 was a year of GenAI hype and exploration. 2024 will be the year AI is adopted at larger scale in production. Employees and users will experience GenAI more regularly through everyday interfaces such as banking apps, eCommerce websites, and insurance inquiries, accelerating adoption in the market.
GenAI demand will shift from GenAI-enabled products, which the market favored in 2023, toward deeper and more customizable tech, like LLMs with agentic capabilities. This will result in two complementary motions: (1) companies will develop their own internal skills to build tailored GenAI use cases and establish a unique approach to GenAI adoption; (2) agentic capabilities will mature, making it more feasible for organizations to develop their own agents instead of relying on external applications.
Enterprises will see that getting ROI on generative AI is harder than it seems. In particular, it requires more than doing a quick evaluation of model capabilities, picking the best one and giving their developers some API credits. There will be a shift towards end-to-end solutions that target specific workflows and tasks.
Some enterprises will still opt for open-source models and develop much of the surrounding tech in-house. For most, it will be too hard to sustain this in-house development at a pace that keeps up with innovation in the market, so buyers will dominate while builders remain the minority.
Winning use-cases will emerge and enterprises will want to take them to production at scale. At this point they'll encounter challenges around unit economics, compliance, privacy and security.
If you're experimenting and ready to move your AI pilots into production, speak with our experts today about scaling AI for real business impact.