Four Obstacles to Enterprise-Scale Generative AI

The road to enterprise-scale adoption of generative AI remains difficult as businesses scramble to harness its potential. Those who have moved forward with generative AI have realized a variety of business improvements. Respondents to a Gartner survey reported, on average, a 15.8% revenue increase, 15.2% cost savings, and a 22.6% productivity improvement.

However, despite the promise the technology holds, 80% of AI projects in organizations fail, as noted by the RAND Corporation. Additionally, Gartner's survey found that only 30% of AI projects move past the pilot stage.

While some companies may have the resources and expertise required to build their own generative AI solutions from scratch, many underestimate the complexity of in-house development and the opportunity costs involved. In-house enterprise AI development promises more control and flexibility, but the reality usually brings unforeseen expenses, technical difficulties, and scalability issues.

Following are four key challenges that can thwart internal generative AI projects.

1. Safeguarding Sensitive Data


Access control lists (ACLs), sets of rules that determine which users or systems can access a resource, play a vital role in protecting sensitive data. However, incorporating ACLs into retrieval-augmented generation (RAG) applications presents a significant challenge. RAG, an AI framework that improves the output of large language models (LLMs) by enhancing prompts with corporate knowledge or other external data, relies heavily on vector search to retrieve relevant information. Unlike in traditional search systems, adding ACLs to vector search dramatically increases computational complexity, often slowing performance. This technical obstacle can hinder the scalability of in-house solutions.

Even for businesses with the resources to build AI solutions, enforcing ACLs at scale is a major hurdle. It demands specialized knowledge and capabilities that most internal teams simply do not possess.
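To make the problem concrete, here is a minimal sketch of ACL pre-filtering on a toy in-memory index. The document IDs, vectors, and group names are purely illustrative; production systems push this filtering into the vector database itself, where it constrains the index traversal and is far harder to do efficiently at scale.

```python
import math

# Toy corpus: each document carries an embedding and an ACL.
# All names, vectors, and groups below are illustrative placeholders.
DOCS = [
    {"id": "hr-policy",   "vec": [0.9, 0.1], "acl": {"hr", "exec"}},
    {"id": "eng-runbook", "vec": [0.2, 0.8], "acl": {"eng"}},
    {"id": "all-hands",   "vec": [0.6, 0.5], "acl": {"hr", "eng", "exec"}},
]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def acl_filtered_search(query_vec, user_groups, top_k=2):
    """Drop documents the user cannot see, then rank the rest by similarity.

    In a real vector index this check either shrinks the searchable set
    per query (pre-filtering) or discards hits after retrieval
    (post-filtering); either way, the cost grows with corpus size.
    """
    allowed = [d for d in DOCS if d["acl"] & set(user_groups)]
    ranked = sorted(allowed, key=lambda d: cosine(query_vec, d["vec"]),
                    reverse=True)
    return [d["id"] for d in ranked[:top_k]]
```

The brute-force scan above is trivially correct but linear in corpus size; approximate nearest-neighbor indexes regain speed only by giving up this simple filter-then-rank structure, which is exactly where the engineering difficulty lies.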

2. Ensuring Regulatory and Corporate Compliance

In highly regulated industries like financial services and manufacturing, adherence to both regulatory and corporate policies is mandatory. This applies not only to human employees but also to their generative AI counterparts, which play an increasing role in both front-end and back-end operations. To mitigate legal and operational risks, generative AI systems must be equipped with AI guardrails that ensure ethical, compliant outputs while maintaining alignment with brand voice and regulatory requirements, such as FINRA rules in the financial space.
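A simple rules-based output check illustrates the guardrail idea. The patterns and disclaimer below are hypothetical placeholders, not actual FINRA requirements; production guardrails typically layer trained classifiers and human review on top of rules like these.

```python
import re

# Hypothetical policy rules for a financial-services chatbot.
FORBIDDEN_PATTERNS = [
    r"\bguaranteed returns?\b",  # no performance guarantees
    r"\brisk[- ]free\b",         # no "risk-free" claims
]
REQUIRED_DISCLAIMER = "This is not investment advice."

def check_output(text: str) -> list[str]:
    """Return a list of policy violations found in a model response.

    An empty list means the response passes this (illustrative) check;
    otherwise the response would be blocked or rewritten before it
    reaches the user.
    """
    violations = []
    for pat in FORBIDDEN_PATTERNS:
        if re.search(pat, text, flags=re.IGNORECASE):
            violations.append(f"forbidden phrase matched: {pat}")
    if REQUIRED_DISCLAIMER.lower() not in text.lower():
        violations.append("missing required disclaimer")
    return violations
```

The hard part in practice is not writing such checks but maintaining them: every new use case, jurisdiction, and policy update expands the rule set, which is one reason compliance work dominates in-house PoCs.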

Many in-house proofs of concept (PoCs) struggle to fully meet the stringent compliance standards of their respective industries, creating risks that can hinder large-scale deployment. As noted, Gartner predicts that at least 30% of generative AI projects will be abandoned after the PoC stage by the end of this year.

3. Maintaining Strong Enterprise Security


In-house generative AI solutions often encounter significant security challenges: protecting sensitive data, meeting information security standards, and staying secure while integrating with enterprise systems. Addressing these issues requires specialized expertise in generative AI security, which many organizations new to the technology lack, raising the potential for data leaks, security breaches, and compliance failures.

4. Expanding Across Use Cases

Building a generative AI application for a single use case is relatively simple, but scaling it to support additional use cases often requires starting from square one each time. This leads to escalating development and maintenance costs that can stretch internal resources thin.

Scaling up also introduces its own set of challenges. Ingesting millions of live documents across multiple repositories, supporting thousands of users, and handling complex ACLs can rapidly drain resources. This not only raises the chances of delaying other IT projects but can also interfere with daily operations.

According to an Everest Group survey, even when pilots go well, CIOs find solutions hard to scale, citing a lack of clarity on success metrics (73%), cost concerns (68%), and the fast-evolving technology landscape (64%).

The issue with in-house generative AI projects is that companies often underestimate the complexities involved in data preparation, infrastructure, security, and maintenance.

Scaling AI solutions requires significant infrastructure and resources, which can be costly and complex. Most organizations that run small pilots on a couple of thousand documents haven't thought through what it takes to scale that up: from the infrastructure to the choice of embedding models and their cost-precision trade-offs.

Building permission-enabled, secure generative AI at scale with the required accuracy is extremely difficult, and the vast majority of companies that try to build it themselves will fail. Why? Because it takes deep expertise, and addressing these challenges isn't their unique selling proposition.

Making the decision to adopt a pre-built platform or develop generative AI solutions internally requires careful consideration. If an organization chooses the wrong path, it could lead to a deployment that drags on, stalls, or hits a dead end, resulting in wasted time, talent, and money. Whatever route an organization selects, it should ensure it has the generative AI technology it needs to be agile, enabling it to rapidly respond to customers’ evolving requirements and stay ahead of the competition. It’s a question of who can get there the fastest with the secure, compliant, and scalable generative AI solutions needed to do this.

About the author: Dorian Selz is CEO of Squirro, a global leader in enterprise-grade generative AI and graph solutions. He co-founded the company in 2012. Selz is a serial entrepreneur with more than 25 years of experience in scaling businesses. His expertise includes semantic search, AI, natural language processing and machine learning.

Related Items:

LLMs and GenAI: When To Use Them

What’s the Hold Up On GenAI?

Focus on the Fundamentals for GenAI Success

The post Four Obstacles to Enterprise-Scale Generative AI appeared first on BigDATAwire.