This repository offers both Dockerized and local proxy solutions for accessing any API, with specialized support for popular interfaces like the OpenAI API. It enables simplified and streamlined interactions with various LLMs.
With the Docker image, you can easily deploy a proxy instance to serve as a gateway between your application and the OpenAI API, reducing the complexity of API interactions and enabling more efficient development.
For users who are restricted from direct access to the OpenAI API, particularly those in countries where OpenAI announced it would block API access starting July 2024, this proxy provides an alternative access path.
For users who need to access private APIs that lack Cross-Origin Resource Sharing (CORS) headers, this solution provides a proxy to bypass CORS restrictions and enable seamless API interactions.
The proxy can also bypass client-side security checks, such as enterprise-internal self-signed TLS certificates that fail certificate validation in many commonly used libraries.
It can also send a Host header that differs from the URL itself: for some custom hosts, frontend projects cannot modify the Host header directly, so the proxy lets you define the URL and the Host header parameters separately.
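To illustrate the Host-header use case, here is a minimal sketch of what a forwarding proxy does: pass the incoming headers through while replacing `Host` with a caller-chosen value. The function name `build_forward_headers` and the hostnames are illustrative, not the repository's actual code.

```python
from typing import Optional


def build_forward_headers(incoming: dict, host_override: Optional[str]) -> dict:
    """Copy the incoming headers, replacing Host when an override is set.

    This mirrors the proxy's HOST parameter: the target URL and the Host
    header are decoupled, which a browser-based client cannot do itself.
    """
    headers = dict(incoming)
    if host_override:
        headers["Host"] = host_override
    return headers


hdrs = build_forward_headers(
    {"Host": "proxy.local", "Accept": "application/json"},
    "internal-api.corp.example",  # hypothetical enterprise-internal host
)
print(hdrs["Host"])  # internal-api.corp.example
```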
- API demo: https://api.aiql.com
- UI demo: ChatUI
- OpenAI API Reference (official docs)
- RESTful OpenAPI (provided by AIQL)
Execute this command to start the proxy with default settings:
```shell
sudo docker run -d -p 9017:9017 aiql/api-proxy:latest
```

Then you can access it at `YOURIP:9017`.
For example, the proxied OpenAI Chat Completion API will be:
`YOURIP:9017/v1/chat/completions`

It should be the same as:

`api.openai.com/v1/chat/completions`
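As a sketch of client-side usage, the only change is the base URL — the path and request body stay in the OpenAI format. `YOURIP`, the model name, and the API key below are placeholders to adjust for your deployment.

```python
from urllib.parse import urljoin


def proxied(base: str, path: str) -> str:
    """Map an OpenAI API path onto the proxy's base URL."""
    return urljoin(base, path)


url = proxied("http://YOURIP:9017", "/v1/chat/completions")
print(url)  # http://YOURIP:9017/v1/chat/completions

# The request body is unchanged -- only the host differs:
payload = {
    "model": "gpt-4o-mini",  # example model name
    "messages": [{"role": "user", "content": "Hello"}],
}
# A real call might then look like (requires the `requests` package):
# requests.post(url, json=payload, headers={"Authorization": "Bearer sk-..."})
```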
You can change the default port and target by passing `-e` environment variables to Docker, which means you can put the proxy in front of any backend that follows the OpenAI API format:
| Parameter | Env Var | Default Value | Description |
|---|---|---|---|
| port | PORT | 9017 | Server port number (valid range: 1-65535) |
| target | TARGET | https://api.openai.com | Target URL or API endpoint to connect to |
| host | HOST | N/A (Inherited from the target URL) | Host header specifying the domain name |
| secure | SECURE | true | Enables security features, such as TLS certificate validation |
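Putting the table together, a run command with overrides might look like the following (the DeepInfra target is one example; `SECURE=false` is only needed for upstreams with self-signed certificates):

```shell
sudo docker run -d -p 9019:9019 \
  -e PORT=9019 \
  -e TARGET="https://api.deepinfra.com/v1/openai" \
  -e SECURE=false \
  aiql/api-proxy:latest
```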
Execute this command to start the proxy with default settings:
```shell
npx @ai-ql/api-proxy
```

To skip installation prompts and specify parameters:

```shell
npx -y @ai-ql/api-proxy --target="https://api.deepinfra.com/v1/openai" --port="9019"
```

Click below to use the GitHub Codespace:
Or fork this repo and create a codespace manually:
- Wait for the environment to become ready in your browser
- Install dependencies and start the server:

```shell
npm ci
npm start
```

The Codespace will then expose a forwarded port (default 9017) so you can check that the proxy is running.
If everything is OK, verify the Docker build with:

```shell
docker build .
```
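To go one step further, you can tag the image and run it locally (the tag name here is illustrative):

```shell
docker build -t api-proxy-local .
docker run -d -p 9017:9017 api-proxy-local
```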
If you want to maintain your own Docker image, refer to the GitHub Actions workflow.

Fork this repo and set `DOCKERHUB_USERNAME` and `DOCKERHUB_TOKEN` in your repository secrets.

Normally, the steps are:
- Fork this repo
- Settings → Secrets and variables → Actions → New repository secret
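A publishing workflow along these lines could be used — this is an illustrative sketch, not the repository's exact file; the filename, branch, and image tag are assumptions:

```yaml
# .github/workflows/docker.yml (illustrative)
name: docker-publish
on:
  push:
    branches: [main]
jobs:
  build-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: ${{ secrets.DOCKERHUB_USERNAME }}/api-proxy:latest
```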
You can apply this approach to other APIs, such as Nvidia NIM:
- The proxied Nvidia NIM Completion API will be:
  `YOURIP:9101/v1/chat/completions`

For convenience, a readily available instance is provided for those who prefer not to deploy it themselves:

`https://nvidia.aiql.com/v1/chat/completions`

A sample docker-compose file for this deployment:
```yaml
services:
  nvidia-proxy:
    image: aiql/api-proxy:latest
    container_name: nvidia-proxy
    environment:
      PORT: "9101"
      TARGET: "https://integrate.api.nvidia.com"
    restart: always
    network_mode: host
```

You can apply this approach with your own domain over HTTPS:
- `YOUREMAILADDR@example.com` will be used to receive certificate notifications from the ACME server
- The proxied OpenAI Chat Completion API will be `api.example.com/v1/chat/completions` (replace `api.example.com` with your own domain name)
```yaml
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443/tcp"
      - "443:443/udp"
    environment:
      ENABLE_HTTP3: "true"
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: always
    network_mode: bridge
  acme-companion:
    image: nginxproxy/acme-companion
    container_name: nginx-proxy-acme
    environment:
      - DEFAULT_EMAIL=YOUREMAILADDR@example.com
    volumes_from:
      - nginx-proxy
    volumes:
      - certs:/etc/nginx/certs:rw
      - acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro
    network_mode: bridge
  api-proxy:
    image: aiql/api-proxy:latest
    container_name: api-proxy
    environment:
      LETSENCRYPT_HOST: api.example.com
      VIRTUAL_HOST: api.example.com
      VIRTUAL_PORT: "9017"
    network_mode: host
    depends_on:
      - "nginx-proxy"
volumes:
  conf:
  vhost:
  html:
  certs:
  acme:
```
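Once DNS for your domain points at the host and the ACME companion has issued a certificate, you can verify the HTTPS path end to end — the endpoint and key below are placeholders for your own deployment:

```shell
curl https://api.example.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```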