GraphRAG 101: Increasing GenAI Accuracy and Completeness

Companies have been applying generative AI across a wide range of functions, including customer support, sales, legal, marketing and many others. As of 2024, 92% of the Fortune 500 have adopted GenAI in some form. Chances are, you’ve tried GenAI yourself, using something like ChatGPT to generate an answer to a question or GitHub Copilot to generate source code based on a spec you input.

Retrieval-Augmented Generation (RAG)

If you’re building generative AI applications, your design goals probably include ensuring AI-generated responses are accurate and comprehensive. One of the most popular ways to achieve this is retrieval-augmented generation (RAG). With RAG, a vector database is typically used to retrieve information pertinent to the user query; this retrieved information is then submitted to a large language model (LLM) as context alongside the query, and the LLM uses it to compose a response.
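To make the pattern concrete, here is a minimal sketch of the RAG flow. The embedding function, similarity ranking and documents are toy stand-ins for a real embedding model and vector database, and the prompt would be sent to an actual LLM; none of these names come from a specific product.

```python
# A minimal sketch of the RAG flow: embed, retrieve, build prompt.
# The embed() function below is a placeholder, not a real embedding model.

def embed(text: str) -> list[float]:
    # Placeholder embedding: real systems use a trained embedding model.
    return [float(ord(c)) for c in text[:8]]

def cosine_sim(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank stored documents by vector similarity to the query embedding.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine_sim(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    # The retrieved passages become the contextual information for the LLM.
    joined = "\n".join(f"- {c}" for c in context)
    return f"Context:\n{joined}\n\nQuestion: {query}\nAnswer using only the context."

docs = [
    "Router model X100 drops Wi-Fi when firmware is below v2.3.",
    "The X100 power adapter is rated at 12V 1.5A.",
    "Slow speeds on the X100 are often fixed by changing the Wi-Fi channel.",
]
query = "Why does my X100 keep dropping Wi-Fi?"
prompt = build_prompt(query, retrieve(query, docs))
# `prompt` would now be submitted to an LLM to compose the response.
```

The key idea is simply that retrieval happens before generation, so the model answers from supplied facts rather than from its training data alone.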

For example, in a customer support bot application, you might submit to the LLM one or more product support articles relevant to the customer inquiry, enabling it to generate a response based on specific, relevant information. If those articles describe multiple possible causes of or resolutions to the issue, however, the AI response might not provide the most accurate or complete advice.

What Is GraphRAG?

A newer and more accurate approach is graph retrieval-augmented generation (GraphRAG). GraphRAG is an evolutionary improvement over RAG wherein a “knowledge graph” is supplied to the LLM as context along with the query. Knowledge graphs are data structures that, unlike text documents, make the relationships between things explicitly clear.

Continuing with our customer support bot example from above, picture a knowledge graph that links the reported issue to each of its possible causes, and each cause to the solution that resolves it.

Where a customer support write-up requires the LLM to infer a solution from its analysis of the language in the documentation, the knowledge graph makes the causes of the issue and their solutions explicit. With that structure, the chatbot can ask more relevant follow-up questions based on the possible causes and formulate a more accurate resolution.
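Here is a hedged sketch of how such a support knowledge graph might be encoded: typed nodes for the issue, its causes and their fixes, with explicit edges between them. The node IDs, labels and relation names are invented for illustration, not drawn from any real product catalog.

```python
# A toy knowledge graph for the support example: explicit nodes and edges.
# All identifiers here (issue:wifi_drops, HAS_CAUSE, etc.) are illustrative.

graph = {
    "nodes": {
        "issue:wifi_drops":    {"type": "Issue",    "label": "Wi-Fi keeps dropping"},
        "cause:old_firmware":  {"type": "Cause",    "label": "Firmware below v2.3"},
        "cause:interference":  {"type": "Cause",    "label": "Channel interference"},
        "fix:update_firmware": {"type": "Solution", "label": "Update firmware"},
        "fix:change_channel":  {"type": "Solution", "label": "Switch Wi-Fi channel"},
    },
    "edges": [
        ("issue:wifi_drops",   "HAS_CAUSE",   "cause:old_firmware"),
        ("issue:wifi_drops",   "HAS_CAUSE",   "cause:interference"),
        ("cause:old_firmware", "RESOLVED_BY", "fix:update_firmware"),
        ("cause:interference", "RESOLVED_BY", "fix:change_channel"),
    ],
}

def neighbors(node: str, relation: str) -> list[str]:
    # Follow outgoing edges of one relation type. This explicit traversal is
    # exactly what a prose document cannot offer an LLM.
    return [dst for src, rel, dst in graph["edges"] if src == node and rel == relation]

causes = neighbors("issue:wifi_drops", "HAS_CAUSE")
fixes = [f for c in causes for f in neighbors(c, "RESOLVED_BY")]
```

In a production system this structure would live in a graph database and be queried with a graph query language, but the principle is the same: relationships are data, not something to be inferred from prose.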

In a published paper, LinkedIn described its use of GraphRAG within customer support. The team did exactly what we’re describing in this example: formed a knowledge graph of possible causes of and solutions to an issue and passed those to the LLM. During the first six months in production, GraphRAG reduced mean time to resolution (MTTR) by 28%.

The Origin of GraphRAG

GraphRAG might be new, but knowledge graphs and AI have a long history together. Over the past 50 years, knowledge graphs have been intertwined with AI research, serving as a way to represent knowledge and support reasoning. In 2012, Google introduced the modern knowledge graph, implemented as a massive graph database of people, places and things that enabled its web search to evolve beyond keyword search and start composing answers to questions.

In the 2000s, statistical methods, neural networks and machine learning rose to prominence, finding success in speech recognition and text-generation systems. In 2017, the transformer architecture made long-range dependencies in text easier to process. These innovations paved the way for today’s LLMs by enabling the training of deeper and larger models.

Microsoft coined the term GraphRAG in early 2024. In one sense, GraphRAG is an AI reunion of sorts between computational AI (LLMs) and recorded knowledge (knowledge graphs) to increase the accuracy, comprehensiveness and diversity of AI-generated responses.

How To Get Started With GraphRAG

If you’re already building generative AI applications, then you’re probably familiar with the retrieval-augmented generation (RAG) pattern. With RAG, you submit a query to an LLM, along with some contextual information (specific or proprietary information like product manuals, user data, etc.) from which the LLM can draw information for its response.

GraphRAG follows the same pattern, except that it passes a knowledge graph to the LLM as context along with the user’s query. There are several architectures for combining graphs with RAG; one is outlined below:

In this architecture:

  1. A user submits a query.
  2. The query is embedded (turned into a vector) by an embedding model.
  3. A vector database is used to find relevant graph entry points (nodes) with a similarity search.
  4. Graph queries are executed to find the nodes related to each entry point, forming a relevant knowledge graph.
  5. These knowledge graphs and the original user query are formed into a prompt that the LLM can understand.
  6. The prompt is fed into the LLM to generate the output to be passed back to the user.
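The six steps above can be sketched end to end as follows. The embedding, vector search and LLM calls are placeholder stand-ins (here, simple word matching), and the node IDs, node text and edge names are invented for illustration.

```python
# End-to-end sketch of the GraphRAG pipeline described above.
# vector_search() stands in for an embedding model plus a vector index.

EDGES = [
    ("cause:old_firmware", "RESOLVED_BY", "fix:update_firmware"),
    ("cause:interference", "RESOLVED_BY", "fix:change_channel"),
]
NODE_TEXT = {
    "cause:old_firmware":  "Firmware below v2.3 causes Wi-Fi drops",
    "cause:interference":  "Channel interference causes Wi-Fi drops",
    "fix:update_firmware": "Update firmware to v2.3 or later",
    "fix:change_channel":  "Switch to a less crowded Wi-Fi channel",
}

def vector_search(query: str, k: int = 2) -> list[str]:
    # Steps 2-3: embed the query and find entry-point nodes by similarity.
    # Real systems use an embedding model; we approximate with word overlap.
    q = set(query.lower().split())
    scored = sorted(NODE_TEXT, key=lambda n: -len(q & set(NODE_TEXT[n].lower().split())))
    return scored[:k]

def expand(entry_points: list[str]) -> set[str]:
    # Step 4: traverse graph edges outward from the entry points.
    found = set(entry_points)
    for src, _rel, dst in EDGES:
        if src in found:
            found.add(dst)
    return found

def graphrag_prompt(query: str) -> str:
    # Steps 5-6: serialize the subgraph plus the query into an LLM prompt.
    subgraph = expand(vector_search(query))
    facts = "\n".join(f"- {NODE_TEXT[n]}" for n in sorted(subgraph))
    return f"Known facts:\n{facts}\n\nQuestion: {query}"

prompt = graphrag_prompt("Why does Wi-Fi keep dropping?")
# `prompt` would be fed to the LLM, whose output goes back to the user.
```

Note how the graph expansion step pulls in the solution nodes even though the user’s query only matched the cause nodes; that traversal is what distinguishes GraphRAG from plain vector retrieval.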

This is only one possible architecture for GraphRAG. There are many others, and which to use depends on the problem being solved, the maturity of the organization, the cohesiveness and quality of the knowledge graph and other factors.

A note about graph and vector database search: Rather than using separate graph and vector databases, seek a database that supports both graph and vector indexing and search. Separate databases increase cost, as well as application and data-management complexity, because the two must be kept in sync.

In Summary

If you’re building generative AI applications and are seeking ways to improve the quality and accuracy of your AI system output, I urge you to look into GraphRAG. Early research has shown that it significantly improves business uses of GenAI, such as cutting the time to resolve customer support issues. The technology curves you’ll want to climb include graph data modeling and, very likely, graph databases. Resources such as online courseware, meetups and graph database vendors can help you master those subjects. It’s not a hard climb, and it is well worth it if you want to create differentiated GenAI solutions that meet or exceed the expectations of your project stakeholders.

The post GraphRAG 101: Increasing GenAI Accuracy and Completeness appeared first on The New Stack.
