
TigerGraph GraphRAG

⚠️ Disclaimer

  • Supported Backend: TigerGraph is the only vector and graph database supported in this project. Hybrid Search is the officially supported retrieval method at the backend.
  • Limitations: No official support is provided unless delivered through a Statement of Work (SOW) with the Solutions team. Customizations such as custom LLM services, prompt logic, UI integration, and pipeline orchestration are customer-owned and self-service. This project is provided "as is" without any warranties or guarantees.

Releases

  • 9/22/2025: GraphRAG v1.1 (v1.1.0) is now officially available. AWS Bedrock support is complete, with BDA integration for multimodal document ingestion.
  • 6/18/2025: GraphRAG v1.0 (v1.0.0) is now officially available. The TigerGraph database is the only graph and vector storage supported. Please see the Release Notes for details.

Overview

GraphRAG Overview

TigerGraph GraphRAG is an AI assistant that is meticulously designed to combine the powers of vector store, graph databases and generative AI to draw the most value from data and to enhance productivity across various business functions, including analytics, development, and administration tasks. It is one AI assistant with two core component services:

  • A natural language assistant for graph-powered solutions
  • A knowledge Q&A assistant for documents and graphs

You can interact with GraphRAG through the built-in chat interface and APIs. For now, your own LLM services (from OpenAI, Azure, GCP, AWS Bedrock, Ollama, Hugging Face, and Groq) are required to use GraphRAG, but in future releases you will be able to use TigerGraph’s LLMs.

Natural Language Query

Natural Language Query

When a question is posed in natural language, GraphRAG employs a novel three-phase interaction with both the TigerGraph database and an LLM of the user's choice to obtain accurate and relevant responses.

The first phase aligns the question with the particular data available in the database. GraphRAG uses the LLM to compare the question with the graph’s schema and replace entities in the question with graph elements. For example, if there is a vertex type of BareMetalNode and the user asks How many servers are there?, the question will be translated to How many BareMetalNode vertices are there?. In the second phase, GraphRAG uses the LLM to compare the transformed question with a set of curated database queries and functions in order to select the best match. In the third phase, GraphRAG executes the identified query and returns the result in natural language along with the reasoning behind the actions.

Using pre-approved queries provides multiple benefits. First and foremost, it reduces the likelihood of hallucinations, because the meaning and behavior of each query has been validated. Second, the system has the potential to predict the execution resources needed to answer the question.

Knowledge Graph Query

Knowledge Graph Query

For inquiries that cannot be answered with structured graph data, GraphRAG employs an AI chatbot backed by a graph-augmented knowledge graph built from a user's own documents or text data. It builds a knowledge graph from the source material and applies its unique variant of knowledge graph-based RAG (Retrieval-Augmented Generation) to improve the contextual relevance and accuracy of answers to natural-language questions.

GraphRAG will also identify concepts and build an ontology, adding semantics and reasoning to the knowledge graph; alternatively, users can provide their own concept ontology. Then, with this comprehensive knowledge graph, GraphRAG performs hybrid retrieval, combining traditional vector search with graph traversals, to collect more relevant information and richer context for answering users’ knowledge questions.

Organizing the data as a knowledge graph allows a chatbot to access accurate, fact-based information quickly and efficiently, thereby reducing the reliance on generating responses from patterns learned during training, which can sometimes be incorrect or out of date.

Go back to top

Getting Started

Prerequisites

  • Docker + Docker Compose Plugin, or Kubernetes
  • TigerGraph DB 4.2+.
  • API key of your LLM provider. (An LLM provider refers to a company or organization that offers Large Language Models (LLMs) as a service. The API key verifies the identity of the requester, ensuring that the request is coming from a registered and authorized user or application.) Currently, GraphRAG supports the following LLM providers: OpenAI, Azure OpenAI, GCP, AWS Bedrock.

Quick Start

Use TigerGraph Docker-Based Instance

Set your LLM provider API key (openai and gemini are supported) as the environment variable LLM_API_KEY and use the following command for a one-step quick deployment with TigerGraph Community Edition and default configurations:

curl -k https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/tutorials/setup_graphrag.sh | bash

The GraphRAG instances will be deployed in the ./graphrag folder, and the TigerGraph instance will be available at http://localhost:14240. To change the installation folder, use bash -s -- <graphrag_folder> <llm_provider> instead of bash at the end of the above command.
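For example, assuming OpenAI as the provider and ./my-graphrag as an illustrative installation folder:

export LLM_API_KEY=<YOUR_OPENAI_API_KEY>
curl -k https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/tutorials/setup_graphrag.sh | bash -s -- ./my-graphrag openai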

Note: for other LLM providers, manually update configs/server_config.json accordingly and re-run docker compose up -d

Use Pre-Installed TigerGraph Instance

The setup is similar to the above; use the following command for a one-step quick deployment that connects to a pre-installed TigerGraph instance with default configurations:

curl -k https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/tutorials/setup_graphrag_tg.sh | bash

The GraphRAG instances will be deployed in the ./graphrag folder and connect to the TigerGraph instance at http://localhost:14240 by default. To change the installation folder, TigerGraph instance location, or username/password, use bash -s -- <graphrag_folder> <llm_provider> <tg_host> <tg_port> <tg_username> <tg_password> instead of bash at the end of the above command.
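For example, a sketch that connects to a TigerGraph instance on a remote host (host, port, and credentials below are placeholders; the LLM_API_KEY environment variable is set as above):

export LLM_API_KEY=<YOUR_API_KEY>
curl -k https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/tutorials/setup_graphrag_tg.sh | bash -s -- ./graphrag openai http://tg-server.example.com 14240 tigergraph <YOUR_TG_PASSWORD>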

Go back to top

Deploy GraphRAG Manually

The GraphRAG services can be deployed manually using Docker Compose or Kubernetes with updated configurations for different use cases.

Manual Deploy of GraphRAG with Docker Compose

Step 1: Get docker-compose file

Download the docker-compose.yml file directly

The Docker Compose file contains all dependencies for GraphRAG including a TigerGraph database. If you want to use a separate TigerGraph instance, you can comment out the tigergraph section from the docker compose file and restart all services. However, please follow the instructions below to make sure your standalone TigerGraph server is accessible from other GraphRAG containers.

Step 2: Set up configurations

Next, download the following configuration files and put them in a configs subdirectory of the directory that contains the Docker Compose file:

Here’s what the folder structure looks like:

    graphrag
    ├── configs
    │   ├── nginx.conf
    │   └── server_config.json
    └── docker-compose.yml
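If you prefer to fetch the configuration files from the command line, curl can be used; the repository paths below are an assumption and may need to be adjusted:

mkdir -p configs
curl -kL https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/configs/nginx.conf -o configs/nginx.conf
curl -kL https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/configs/server_config.json -o configs/server_config.json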
Step 3: Adjust configurations

Edit the llm_config section of configs/server_config.json and replace <YOUR_OPENAI_API_KEY> with your own OpenAI API key.

If desired, you can also change the models used by the embedding service and the completion service to your preferred models to adjust the output from the LLM service.

Step 4: Configure Logging Level in Dockerfile (Optional)

To configure the logging level of the service, edit the Docker Compose file.

By default, the logging level is set to "INFO".

ENV LOGLEVEL="INFO"

This line can be changed to support different logging levels.

The levels are described below:

  • CRITICAL - A serious error.
  • ERROR - Failing to perform functions.
  • WARNING - Indication of unexpected problems, e.g. failure to map a user’s question to the graph schema.
  • INFO - Confirming that the service is performing as expected.
  • DEBUG - Detailed information, e.g. the functions retrieved during the GenerateFunction step.
  • DEBUG_PII - Finer-grained information that could potentially include PII, such as a user’s question, the complete function call (with parameters), and the LLM’s natural language response.
  • NOTSET - All messages are processed.
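One way to override the level is through the environment section of the Docker Compose file; this is a minimal sketch that assumes the main service is named graphrag (apply the same override to the other GraphRAG services if desired):

    services:
      graphrag:
        environment:
          LOGLEVEL: "DEBUG"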
Step 5: Start all services

Now, simply run docker compose up -d and wait for all the services to start.

Note: the graphrag container will go down if the TigerGraph service is not ready. Logging into the tigergraph container, bringing up the TigerGraph services, and re-running docker compose up -d should resolve the issue.
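For example, the TigerGraph services can be brought up from inside the container before re-running Docker Compose:

docker exec -it tigergraph /bin/bash
gadmin start all
exit
docker compose up -d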

Step 6: Stop all services (when needed)

Run docker compose down and wait for all the service containers to be stopped and removed.

Go back to top

Use Standalone TigerGraph instance (If preferred)

Note: the vector feature is available in both TigerGraph Community Edition 4.2.0+ and Enterprise Edition 4.2.0+.

If you prefer to start a TigerGraph Community Edition instance without a license key, please make sure the container can be accessed from the GraphRAG containers by adding --network graphrag_default:

docker run -d -p 14240:14240 --name tigergraph --ulimit nofile=1000000:1000000 --init --network graphrag_default -t tigergraph/community:4.2.1

Use tigergraph/tigergraph:4.2.1 if Enterprise Edition is preferred. Setting up DNS or /etc/hosts properly is an alternative way to ensure the containers can connect to each other. Alternatively, modify hostname in the db_config section of configs/server_config.json and replace http://tigergraph with your tigergraph container's IP address, e.g., http://172.19.0.2.
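For example, only the hostname field needs to change; the IP below is illustrative, and the other db_config fields stay as described in the DB configuration section further down:

{
    "db_config": {
        "hostname": "http://172.19.0.2"
    }
}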

Check the service status with the following commands:

docker exec -it tigergraph /bin/bash
gadmin status
gadmin start all

When you are done with the database and want to shut it down, use the following shell command:

gadmin stop all

Go back to top

Manual Deploy of GraphRAG with Kubernetes

Step 1: Get kubernetes deployment file

Download the graphrag-k8s.yml file directly

Step 2: Modify graphrag-k8s.yml (Optional)

Remove the sections for the tigergraph instance if you're using a standalone TigerGraph instance instead.

Step 3: Set up server configurations

Next, in the same directory as the Kubernetes deployment file, create a configs directory and download the following configuration files:

Update the TigerGraph database information, LLM API keys and other configs accordingly.

Step 4: Install Nginx Ingress (Optional)

If Nginx Ingress is not installed yet, it can be installed using kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.1/deploy/static/provider/cloud/deploy.yaml

Step 5: Start all services

Replace /path/to/graphrag/configs with the absolute path of the configs folder inside graphrag-k8s.yml, and update the TigerGraph database information and other configs accordingly.
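One way to make that substitution is with sed; this sketch assumes the configs folder sits next to graphrag-k8s.yml (on macOS, use sed -i '' instead):

sed -i "s|/path/to/graphrag/configs|$(pwd)/configs|" graphrag-k8s.yml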

Now, simply run kubectl apply -f graphrag-k8s.yml and wait for all the services to start.
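The rollout can be monitored until all pods report Running:

kubectl get pods -w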

Step 6: Stop all services (Optional)

Run kubectl delete -f graphrag-k8s.yml and wait for all the services in the deployment to be deleted.

Note: Nginx Ingress should be deleted using kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.1/deploy/static/provider/cloud/deploy.yaml if port 80 needs to be released

Go back to top

Use TigerGraph GraphRAG

GraphRAG is friendly to both technical and non-technical users. There is a graphical chat interface as well as API access to GraphRAG. Function-wise, GraphRAG can answer your questions by calling existing queries in the database, build a knowledge graph from your documents, and answer knowledge questions based on your documents.

Run Demo with Preloaded GraphRAG

The pre-loaded knowledge graph TigerGraphRAG is provided for quick access to the GraphRAG features.

Step 1: Get data package

Download the following data file and put it under /home/tigergraph/graphrag/ inside your TigerGraph container:

Use the following commands if the file cannot be downloaded inside the TigerGraph container directly:

docker exec -it tigergraph mkdir -p /home/tigergraph/graphrag
docker exec -it tigergraph curl -kL https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/data/ExportedGraph.zip -o /home/tigergraph/graphrag/ExportedGraph.zip

Note: the commands should be adapted to their equivalent forms if a standalone TigerGraph instance is used.

Step 2: Import data package

Next, log onto the TigerGraph instance and use the Database Import feature to recreate the GraphRAG graph:

docker exec -it tigergraph /bin/bash
gsql "import graph all from \"/home/tigergraph/graphrag\""
gsql "install query all"

Wait until the following output is given:

[======================================================================================================] 100% (26/26)
Query installation finished.

Step 3: Access Chatbot UI

Open your browser and go to http://localhost:<nginx_port> to access GraphRAG Chat, for example: http://localhost:80

Enter the username and password of the TigerGraph database to log in.

Chat Login

At the top of the page, select Community Search as the RAG pattern and TigerGraphRAG as the Graph. RAG Config

In the chat box, input the question how to load data to tigergraph vector store, give an example in Python and click the send button. Demo Question

You can also ask other questions on statistics and data inside the TigerGraph database. Data

Go back to top

Manually Build GraphRAG From Scratch

If you want to experience the whole GraphRAG process, you can build the GraphRAG graph from scratch. However, please review the LLM model and service settings carefully, because it will cost some money to re-generate the embeddings and data structures for the raw data.

Step 1: Get demo script

The following scripts are needed to run the demo. Please download them and put them in the same directory (./graphrag) as the Docker Compose file:

Step 2: Download the demo data

Next, download the following data file and put it in a data subdirectory of the directory that contains the Docker Compose file:

Step 3: Run the demo driver script

Note: Python 3.11+ is needed to run the demo

It is recommended to use a virtual env to isolate the runtime environment for the demo

python3.11 -m venv demo
source demo/bin/activate

Now, simply run the demo script to try GraphRAG.

  ./graphrag_demo.sh

The script will:

  1. Check the environment
  2. Initialize the TigerGraph schema and the related queries needed
  3. Load the sample data
  4. Initialize GraphRAG based on the graph and install the required queries
  5. Ask a question via Python to get an answer from GraphRAG (a sketch is shown below)
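As an illustration of the last step, here is a minimal Python sketch; the endpoint path, port, and payload shape are assumptions based on the default Nginx setup, so check graphrag_demo.sh for the exact request the demo issues:

import requests

# Hypothetical call to GraphRAG behind the Nginx service on localhost:80.
# The route and payload are assumptions; see graphrag_demo.sh for the actual request.
resp = requests.post(
    "http://localhost:80/<graph_name>/query",   # assumed route; replace <graph_name>
    auth=("tigergraph", "<YOUR_TG_PASSWORD>"),  # TigerGraph database credentials
    json={"query": "How many documents are loaded?"},
    timeout=300,
)
print(resp.json())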

Go back to top

Detailed Data Ingestion Methods

For more examples of data ingestion, please follow the GraphRAG Demo Notebook

Go back to top

More Detailed Configurations

DB configuration

Copy the below into configs/server_config.json and edit the hostname and getToken fields to match your database's configuration. If token authentication is enabled in TigerGraph, set getToken to true. Set the timeout, memory threshold, and thread limit parameters as desired to control how much of the database's resources are consumed when answering a question.

{
    "db_config": {
        "hostname": "http://tigergraph",
        "restppPort": "9000",
        "gsPort": "14240",
        "getToken": false,
        "default_timeout": 300,
        "default_mem_threshold": 5000,
        "default_thread_limit": 8
    }
}

GraphRAG configuration

Copy the below code into configs/server_config.json. You shouldn’t need to change anything unless you change the port of the chat history service in the Docker Compose file.

Setting reuse_embedding to true will skip re-generating the embedding if it already exists. ecc and chat_history_api are the addresses of internal components of GraphRAG. If you use the Docker Compose file as is, you don’t need to change them.

{
    "graphrag_config": {
        "reuse_embedding": false,
        "ecc": "http://eventual-consistency-service:8001",
        "chat_history_api": "http://chat-history:8002"
    }
}

Chat configuration

Copy the below code into configs/server_config.json. You shouldn’t need to change anything unless you change the port of the chat history service in the Docker Compose file.

{
    "chat-history": {
        "apiPort":"8002",
        "dbPath": "chats.db",
        "dbLogPath": "db.log",
        "logPath": "requestLogs.jsonl",
        "conversationAccessRoles": ["superuser", "globaldesigner"]
    }
}

Go back to top

LLM provider configuration

In the llm_config section of the configs/server_config.json file, copy the JSON config template below for your LLM provider and fill out the appropriate fields. Only one provider is needed.

OpenAI

In addition to the OPENAI_API_KEY, llm_model and model_name can be edited to match your specific configuration details.

{
    "llm_config": {
        "embedding_service": {
            "embedding_model_service": "openai",
            "model_name": "text-embedding-3-small",
            "authentication_configuration": {
                "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY_HERE"
            }
        },
        "completion_service": {
            "llm_service": "openai",
            "llm_model": "gpt-4.1-mini",
            "authentication_configuration": {
                "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY_HERE"
            },
            "model_kwargs": {
                "temperature": 0
            },
            "prompt_path": "./app/prompts/openai_gpt4/"
        }
    }
}

Google GenAI

Get your Gemini API key via https://aistudio.google.com/app/apikey.

{
    "llm_config": {
        "embedding_service": {
            "embedding_model_service": "genai",
            "model_name": "models/gemini-embedding-exp-03-07",
            "dimensions": 1536,
            "authentication_configuration": {
                "GOOGLE_API_KEY": "YOUR_GOOGLE_API_KEY_HERE"
            }
        },
        "completion_service": {
            "llm_service": "genai",
            "llm_model": "gemini-2.5-flash",
            "authentication_configuration": {
                "GOOGLE_API_KEY": "YOUR_GOOGLE_API_KEY_HERE"
            },
            "model_kwargs": {
                "temperature": 0
            },
            "prompt_path": "./common/prompts/google_gemini/"
        }
    }
}

GCP VertexAI

Follow the GCP authentication information found here: https://cloud.google.com/docs/authentication/application-default-credentials#GAC and create a Service Account with VertexAI credentials. Then add the following to the docker run command:

-v $(pwd)/configs/SERVICE_ACCOUNT_CREDS.json:/SERVICE_ACCOUNT_CREDS.json -e GOOGLE_APPLICATION_CREDENTIALS=/SERVICE_ACCOUNT_CREDS.json

And your JSON config should follow as:

{
    "llm_config": {
        "embedding_service": {
            "embedding_model_service": "vertexai",
            "model_name": "GCP-text-bison",
            "authentication_configuration": {}
        },
        "completion_service": {
            "llm_service": "vertexai",
            "llm_model": "text-bison",
            "model_kwargs": {
                "temperature": 0
            },
            "prompt_path": "./app/prompts/gcp_vertexai_palm/"
        }
    }
}

Azure

In addition to the AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, and azure_deployment, llm_model and model_name can be edited to match your specific configuration details.

{
    "llm_config": {
        "embedding_service": {
            "embedding_model_service": "azure",
            "model_name": "GPT35Turbo",
            "azure_deployment":"YOUR_EMBEDDING_DEPLOYMENT_HERE",
            "authentication_configuration": {
                "OPENAI_API_TYPE": "azure",
                "OPENAI_API_VERSION": "2022-12-01",
                "AZURE_OPENAI_ENDPOINT": "YOUR_AZURE_ENDPOINT_HERE",
                "AZURE_OPENAI_API_KEY": "YOUR_AZURE_API_KEY_HERE"
            }
        },
        "completion_service": {
            "llm_service": "azure",
            "azure_deployment": "YOUR_COMPLETION_DEPLOYMENT_HERE",
            "openai_api_version": "2023-07-01-preview",
            "llm_model": "gpt-35-turbo-instruct",
            "authentication_configuration": {
                "OPENAI_API_TYPE": "azure",
                "AZURE_OPENAI_ENDPOINT": "YOUR_AZURE_ENDPOINT_HERE",
                "AZURE_OPENAI_API_KEY": "YOUR_AZURE_API_KEY_HERE"
            },
            "model_kwargs": {
                "temperature": 0
            },
            "prompt_path": "./app/prompts/azure_open_ai_gpt35_turbo_instruct/"
        }
    }
}

AWS Bedrock

{
    "llm_config": {
        "embedding_service": {
            "embedding_model_service": "bedrock",
            "model_name":"amazon.titan-embed-text-v2",
            "region_name":"us-west-2",
            "authentication_configuration": {
                "AWS_ACCESS_KEY_ID": "ACCESS_KEY",
                "AWS_SECRET_ACCESS_KEY": "SECRET"
            }
        },
        "completion_service": {
            "llm_service": "bedrock",
            "llm_model": "us.anthropic.claude-3-7-sonnet-20250219-v1:0",
            "region_name":"us-west-2",
            "authentication_configuration": {
                "AWS_ACCESS_KEY_ID": "ACCESS_KEY",
                "AWS_SECRET_ACCESS_KEY": "SECRET"
            },
            "model_kwargs": {
                "temperature": 0,
            },
            "prompt_path": "./app/prompts/aws_bedrock_claude3haiku/"
        }
    }
}

Ollama

{
    "llm_config": {
        "embedding_service": {
            "embedding_model_service": "ollama",
            "base_url": "http://ollama:11434",
            "model_name": "nomic-embed-text",
            "dimensions": 768,
            "authentication_configuration": {
            }
        },
        "completion_service": {
            "llm_service": "ollama",
            "base_url": "http://ollama:11434",
            "llm_model": "calebfahlgren/natural-functions",
            "model_kwargs": {
                "temperature": 0.0000001
            },
            "prompt_path": "./app/prompts/openai_gpt4/"
        }
    }
}

Hugging Face

Example configuration for a model on Hugging Face with a dedicated endpoint is shown below. Please specify your configuration details:

{
    "llm_config": {
        "embedding_service": {
            "embedding_model_service": "openai",
            "model_name": "llama3-8b",
            "authentication_configuration": {
                "OPENAI_API_KEY": ""
            }
        },
        "completion_service": {
            "llm_service": "huggingface",
            "llm_model": "hermes-2-pro-llama-3-8b-lpt",
            "endpoint_url": "https:endpoints.huggingface.cloud",
            "authentication_configuration": {
                "HUGGINGFACEHUB_API_TOKEN": ""
            },
            "model_kwargs": {
                "temperature": 0.1
            },
            "prompt_path": "./app/prompts/openai_gpt4/"
        }
    }
}

Example configuration for a model on Hugging Face with a serverless endpoint is shown below. Please specify your configuration details:

{
    "llm_config": {
        "embedding_service": {
            "embedding_model_service": "openai",
            "model_name": "Llama3-70b",
            "authentication_configuration": {
                "OPENAI_API_KEY": ""
            }
        },
        "completion_service": {
            "llm_service": "huggingface",
            "llm_model": "meta-llama/Meta-Llama-3-70B-Instruct",
            "authentication_configuration": {
                "HUGGINGFACEHUB_API_TOKEN": ""
            },
            "model_kwargs": {
                "temperature": 0.1
            },
            "prompt_path": "./app/prompts/llama_70b/"
        }
    }
}

Groq

{
    "llm_config": {
        "embedding_service": {
            "embedding_model_service": "openai",
            "model_name": "mixtral-8x7b-32768",
            "authentication_configuration": {
                "OPENAI_API_KEY": ""
            }
        },
        "completion_service": {
            "llm_service": "groq",
            "llm_model": "mixtral-8x7b-32768",
            "authentication_configuration": {
                "GROQ_API_KEY": ""
            },
            "model_kwargs": {
                "temperature": 0.1
            },
            "prompt_path": "./app/prompts/openai_gpt4/"
        }
    }
}

Go back to top

Customization and Extensibility

TigerGraph GraphRAG is designed to be easily extensible. The service can be configured to use different LLM providers, different graph schemas, and different LangChain tools, and it can also be extended to use different embedding services and LLM generation services. For more information on how to extend the service, see the Developer Guide.

Test Your Code Changes

A family of tests is included under the tests directory. If you would like to add more tests, please refer to the guide here. A shell script, run_tests.sh, is also included in the folder; it is the driver for running the tests. The easiest way to use this script is to execute it in the Docker container for testing.

Testing with Pytest

You can run the tests for each service by going to the top level of the service's directory and running python -m pytest

e.g. (from the top level)

cd graphrag
python -m pytest
cd ..

Test Code Change in Docker Container

First, make sure that all your LLM service provider configuration files are working properly. The configs will be mounted for the container to access. Also make sure that all the dependencies, such as the database, are ready. If not, you can run the included Docker Compose file to create those services.

docker compose up -d --build

If you want to use Weights And Biases for logging the test results, your WandB API key needs to be set in an environment variable on the host machine.

export WANDB_API_KEY=<YOUR_WANDB_API_KEY>

Then, you can build the docker container from the Dockerfile.tests file and run the test script in the container.

docker build -f Dockerfile.tests -t graphrag-tests:0.1 .

docker run -d -v $(pwd)/configs/:/ -e GOOGLE_APPLICATION_CREDENTIALS=/GOOGLE_SERVICE_ACCOUNT_CREDS.json -e WANDB_API_KEY=$WANDB_API_KEY -it --name graphrag-tests graphrag-tests:0.1


docker exec graphrag-tests bash -c "conda run --no-capture-output -n py39 ./run_tests.sh all all"

Test Script Options

To edit which tests are executed, pass arguments to the ./run_tests.sh script. Currently, one can configure which LLM service to use (defaults to all), which schemas to test against (defaults to all), and whether or not to use Weights and Biases for logging (defaults to true). The options are described below:

Configure LLM Service

The first parameter to run_tests.sh is what LLMs to test against. Defaults to all. The options are:

  • all - run tests against all LLMs
  • azure_gpt35 - run tests against GPT-3.5 hosted on Azure
  • openai_gpt35 - run tests against GPT-3.5 hosted on OpenAI
  • openai_gpt4 - run tests on GPT-4 hosted on OpenAI
  • gcp_textbison - run tests on text-bison hosted on GCP

Configure Testing Graphs

The second parameter to run_tests.sh is what graphs to test against. Defaults to all. The options are:

  • all - run tests against all available graphs
  • OGB_MAG - The academic paper dataset provided by: https://ogb.stanford.edu/docs/nodeprop/#ogbn-mag.
  • DigtialInfra - Digital infrastructure digital twin dataset
  • Synthea - Synthetic health dataset

Configure Weights and Biases

If you wish to log the test results to Weights and Biases (and have the correct credentials set up as described above), the final parameter to run_tests.sh defaults to true. If you wish to disable Weights and Biases logging, pass false.
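For example, to run only the GPT-4 tests against the OGB_MAG graph with Weights and Biases logging disabled:

./run_tests.sh openai_gpt4 OGB_MAG false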
