Survey: Nearly 1/3 of GenAI users look to RAG for information handling

Security and quality of data indicate the need for better data governance; insufficient skills and lack of context awareness rank among the greatest risks to leveraging Generative AI

While the generative AI (GenAI) revolution rolls forward at full steam, it is not without its share of fear, uncertainty, and doubt, according to the new State of Play on LLM and RAG: Preparing Your Knowledge Organization for Generative AI survey. Sponsored by Graphwise, the leading Graph AI provider newly formed through the merger of Ontotext with Semantic Web Company, the study found that the promise of large language models (LLMs) is tempered by concerns over hallucinations, bias, data security, black-box decisioning, and outdated information. Security and quality of data were seen as the greatest risks: 71% of respondents said their increased usage of GenAI puts the quality of its output at risk. There is near-unanimous agreement (99%) that humans need to stay in the loop to mitigate such risks.

Despite these concerns, LLMs are becoming pervasive across most organizations, particularly in testing and development: 85% of respondents are either exploring and testing their potential or already have LLMs in production. Nine in ten will keep expanding their LLM implementations, with content creation and knowledge discovery as the primary applications. Two-thirds (67%) seek to employ LLMs to help employees access insights, followed closely by expected employee productivity gains and reduced time for knowledge workers to access information (65% each).

To meet objectives like these, users of generative AI are looking to retrieval-augmented generation (RAG) environments for improved contextual results, actionable data, and reduced time to insight with precision and traceability. Approaches such as knowledge graphs were cited as a critical way to leverage an organization's structured and unstructured data and to ground RAG systems, removing typical barriers and risks to generative AI success. Close to a third of respondents are turning to RAG to support their information handling: 29% have RAG solutions in place or are implementing them to bridge the gap between corporate databases and LLMs.
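The retrieve-then-generate pattern behind RAG can be illustrated with a minimal sketch. Everything here is illustrative rather than drawn from the survey: the toy corpus, the naive word-overlap retriever, and the placeholder `answer()` step stand in for a production vector store or knowledge graph and a real LLM call.

```python
# Minimal RAG sketch (illustrative only): retrieve relevant documents,
# then ground the generation prompt in that retrieved context.
# A real system would use a vector store or knowledge graph and an LLM API.

CORPUS = {
    "hr-policy": "Employees accrue 20 vacation days per year.",
    "security": "All corporate data must be encrypted at rest.",
    "onboarding": "New hires complete security training in week one.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        CORPUS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query: str) -> str:
    """Build a prompt grounded in retrieved context before generation."""
    context = " ".join(retrieve(query))
    # Placeholder for the LLM call: return the grounded prompt itself.
    return f"Context: {context}\nQuestion: {query}"

print(answer("How many vacation days do employees get?"))
```

The grounding step is the point: the model is asked to answer from retrieved corporate content rather than from its training data alone, which is what gives RAG its traceability.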

Most agreed that their businesses would come to depend on RAG; close to half felt it will help make information more actionable and closer to real time. The strategic value of LLMs and RAG lies in their ability to transform how organizations manage and utilize knowledge, leading to improved productivity, better decision-making, enhanced customer experiences, and increased efficiency.

“Large organizations are enamored by the promise of AI and how it can transform proprietary insights into a competitive advantage. As the study confirms, companies are eager to invest in generative AI, yet without rigorous data quality control, these investments risk being squandered on training AI models with irrelevant or inaccurate data, leading to flawed outcomes and hindering the anticipated return on investment,” said Andreas Blumauer, Senior VP Growth, Graphwise. “Wrong decisions cost money and often cause irreparable reputational damage from inaccurate results, misguided or biased decisions, and dangerous hallucinations. Combining knowledge graph infrastructure with semantic AI technology is a vital step and comparatively small investment for AI to forge its place as a permanent part of business infrastructure.”

The post Survey: Nearly 1/3 of GenAI users look to RAG for information handling first appeared on AI-Tech Park.