Unified AI Router

Unified AI Router is a comprehensive toolkit for AI applications, featuring:

  • An OpenAI-compatible server for seamless API integration
  • A unified interface for multiple LLM providers with automatic fallback

It works with any OpenAI-compatible server, including major providers like OpenAI, Google, Grok, LiteLLM, vLLM, and Ollama, ensuring reliability and flexibility.

🚀 Features

  • Multi-Provider Support: Works with OpenAI, Google, Grok, OpenRouter, Z.ai, Groq, Cohere, Cerebras, LLM7, and more
  • Automatic Fallback: If one provider fails for any reason, the router automatically tries the next
  • Circuit Breaker: Built-in fault tolerance with automatic circuit breaking for each provider to prevent cascading failures
  • OpenAI-Compatible Server: Drop-in replacement for the OpenAI API, enabling easy integration with existing tools and clients
  • Simple API: Easy-to-use interface for all supported providers
  • Streaming and Non-Streaming Support: Handles both streaming and non-streaming responses
  • Tool Calling: Full support for tools in LLM interactions

🛠️ Installation

npm i unified-ai-router
# OR
git clone https://github.com/mlibre/Unified-AI-Router
cd Unified-AI-Router
npm i

📖 Usage

📚 Basic Library Usage

This is the core AIRouter library - a JavaScript class that provides a unified interface for multiple LLM providers.

const AIRouter = require("unified-ai-router");
require("dotenv").config();

const providers = [
  {
    name: "openai",
    apiKey: process.env.OPENAI_API_KEY,
    model: "gpt-4",
    apiUrl: "https://api.openai.com/v1"
  },
  {
    name: "google",
    apiKey: process.env.GEMINI_API_KEY,
    model: "gemini-2.5-pro",
    apiUrl: "https://generativelanguage.googleapis.com/v1beta/openai/"
  }
];

const llm = new AIRouter(providers);

const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Explain quantum computing in simple terms." }
];

async function main () {
  // Tries each provider in order until one succeeds
  const response = await llm.chatCompletion(messages, {
    temperature: 0.7
  });
  console.log(response);
}

main();

You can also provide an array of API keys for a single provider definition.

const providers = [
  {
    name: "openai",
    apiKey: [process.env.OPENAI_API_KEY_1, process.env.OPENAI_API_KEY_2],
    model: "gpt-4",
    apiUrl: "https://api.openai.com/v1"
  }
];
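
Tool calling uses the standard OpenAI tool format. The sketch below is an assumption about the exact call shape (passing a tools array through the options object, as in the OpenAI API); see tests/tools.js for the authoritative example.

const AIRouter = require("unified-ai-router");
require("dotenv").config();

const llm = new AIRouter([
  {
    name: "openai",
    apiKey: process.env.OPENAI_API_KEY,
    model: "gpt-4",
    apiUrl: "https://api.openai.com/v1"
  }
]);

// A standard OpenAI-format tool definition (get_weather is hypothetical)
const tools = [
  {
    type: "function",
    function: {
      name: "get_weather",
      description: "Get the current weather for a city",
      parameters: {
        type: "object",
        properties: {
          city: { type: "string", description: "City name" }
        },
        required: ["city"]
      }
    }
  }
];

async function main () {
  // Assumption: tools are forwarded via the options object, mirroring the OpenAI API
  const response = await llm.chatCompletion(
    [{ role: "user", content: "What is the weather in Paris?" }],
    { tools }
  );
  console.log(JSON.stringify(response, null, 2));
}

main();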

🔌 OpenAI-Compatible Server

The OpenAI-compatible server provides a drop-in replacement for the OpenAI API. It routes requests through the unified router with fallback logic, ensuring high availability.
The server uses the provider configurations defined in the provider.js file and requires API keys set in a .env file.

  1. Copy the example environment file:

    cp .env.example .env
  2. Edit .env and add your API keys for the desired providers (see 🔑 API Keys for sources).

  3. Configure your providers in provider.js: add new providers or modify existing ones with the appropriate name, apiKey, model, and apiUrl for the providers you want to use (a sketch of the expected shape is shown below).
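
For reference, a minimal sketch of what provider.js might look like, assuming it exports the same provider array used by the library examples above (the file shipped in the repo is the authoritative reference):

// provider.js - a sketch; field names mirror the library usage above
require("dotenv").config();

module.exports = [
  {
    name: "openai",
    apiKey: process.env.OPENAI_API_KEY,
    model: "gpt-4",
    apiUrl: "https://api.openai.com/v1"
  },
  {
    name: "groq",
    apiKey: process.env.GROQ_API_KEY,
    model: "llama-3.3-70b-versatile",
    apiUrl: "https://api.groq.com/openai/v1"
  }
];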

To start the server locally, run:

npm start

The server listens at http://localhost:3000/ and supports the following OpenAI-compatible endpoints:

  • POST /v1/chat/completions - Chat completions (streaming and non-streaming)
  • POST /chat/completions - Chat completions (streaming and non-streaming)
  • GET /v1/models - List available models
  • GET /models - List available models
  • GET /health - Health check
  • GET /v1/providers/status - Check the status of all configured providers
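
Because the endpoints are OpenAI-compatible, any OpenAI client can point at the router. A minimal sketch using the official openai npm package (the placeholder API key is an assumption; the real provider keys live server-side in .env):

const OpenAI = require("openai");

const client = new OpenAI({
  baseURL: "http://localhost:3000/v1",
  apiKey: "unused" // assumption: provider keys come from the server's .env
});

async function main () {
  // Non-streaming request
  const completion = await client.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: "Hello!" }]
  });
  console.log(completion.choices[0].message.content);

  // Streaming request (the endpoints above advertise streaming support)
  const stream = await client.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: "Count to five." }],
    stream: true
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }
}

main();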

🧪 Testing

The project includes tests for the core library and the OpenAI-compatible server. To run the tests, use the following commands:

# Test chat completion
node tests/chat.js

# Test OpenAI server non-streaming
node tests/openai-server-non-stream.js

# Test OpenAI server streaming
node tests/openai-server-stream.js

# Test tool usage
node tests/tools.js

🌐 Deploying to Render.com

Ensure provider.js is configured and your API keys are set in .env (as above). Push to GitHub, then:

  1. Dashboard:

    • Create a Web Service on Render.com and connect your repo.
    • Build Command: npm install
    • Start Command: npm start
    • Add env vars (e.g., OPENAI_API_KEY=sk-...).
    • Deploy.
  2. CLI:

    curl -fsSL https://raw.githubusercontent.com/render-oss/cli/refs/heads/main/bin/install.sh | sh
    render login
    render services
    render deploys create srv-d3f7iqmmcj7s73e67feg --commit HEAD --confirm --output text
  3. Verify:

    • Access https://your-service.onrender.com/models.
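
A quick smoke test from the command line (replace the hostname with your service's):

curl https://your-service.onrender.com/health
curl https://your-service.onrender.com/models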

See Render docs for details.

🔧 Supported Providers

  • OpenAI
  • Google Gemini
  • Grok
  • OpenRouter
  • Z.ai
  • Groq
  • Cohere
  • Cerebras
  • LLM7
  • Any other OpenAI-compatible server

🔑 API Keys

Get your API keys from the dashboard of each provider listed under 🔧 Supported Providers.

📁 Project Structure

  • main.js - Core AIRouter library implementing the unified interface and fallback logic
  • provider.js - Configuration for supported AI providers
  • openai-server.js - OpenAI-compatible API server
  • tests/ - Comprehensive tests for the library, server, and tools

📄 License

MIT
