Demo video: openvoice_realtime_conversation.mp4 (real-time conversation using OpenVoice)
Support this project by donating through one of the following options:
- Crypto: 0x02030569e866e22C9991f55Db0445eeAd2d646c8
- GitHub Sponsors: https://github.com/sponsors/w4ffl35
- Patreon: https://www.patreon.com/c/w4ffl35
✉️ Get notified when the packaged version releases
| ✨ Key Features | |
|---|---|
| 🗣️ Real-time conversations | Three speech engines: espeak, SpeechT5, OpenVoice<br>Automatic language detection (OpenVoice)<br>Real-time voice chat with LLMs |
| 🤖 Customizable AI Agents | Custom agent names, moods, and personalities<br>Retrieval-Augmented Generation (RAG) |
| 📚 Enhanced Knowledge Retrieval | RAG for documents and websites<br>Use local data to enrich chat |
| 🖼️ Image Generation & Manipulation | Text-to-Image (Stable Diffusion 1.5, SDXL, Turbo)<br>Drawing tools & ControlNet<br>LoRA & embeddings<br>Inpainting, outpainting, and filters |
| 🌍 Multi-lingual Capabilities | Partial multi-lingual TTS/STT and interface<br>English & Japanese GUI |
| 🔒 Privacy and Security | Runs locally, no external API by default<br>Customizable LLM guardrails & image safety<br>Disables HuggingFace telemetry<br>Restricts network access |
| ⚡ Performance & Utility | Fast generation (~2s on RTX 2080s)<br>Docker-based setup & GPU acceleration<br>Theming (Light/Dark/System)<br>NSFW toggles<br>Extension API<br>Python library & API support |
| Language | TTS | LLM | STT | GUI | 
|---|---|---|---|---|
| English | ✅ | ✅ | ✅ | ✅ | 
| Japanese | ✅ | ✅ | ❌ | ✅ | 
| Spanish | ✅ | ✅ | ❌ | ❌ | 
| French | ✅ | ✅ | ❌ | ❌ | 
| Chinese | ✅ | ✅ | ❌ | ❌ | 
| Korean | ✅ | ✅ | ❌ | ❌ | 
AI Runner is a powerful tool designed for local, private use. However, its capabilities mean that users must be aware of their responsibilities under emerging AI regulations. This section provides information regarding the Colorado AI Act.
As the developer of AI Runner, we have a duty of care to inform our users about how this law may apply to them.
- Your Role as a User: If you use AI Runner to make, or as a substantial factor in making, an important decision that has a legal or similarly significant effect on someone's life, you may be considered a "deployer" of a "high-risk AI system" under Colorado law.
 - What is a "High-Risk" Use Case? Examples of high-risk decisions include using AI to screen job applicants or to evaluate eligibility for loans, housing, insurance, or other essential services.
 - User Responsibility: Given AI Runner's customizable nature (e.g., using RAG with personal or business documents), it is possible to configure it for such high-risk purposes. If you do so, you are responsible for complying with the obligations of a "deployer," which include performing impact assessments and preventing algorithmic discrimination.
 - Our Commitment: We are committed to developing AI Runner responsibly. The built-in privacy features, local-first design, and configurable guardrails are intended to provide you with the tools to use AI safely. We strongly encourage you to understand the capabilities and limitations of the AI models you choose to use and to consider the ethical implications of your specific application.
 
For more information, we recommend reviewing the text of the Colorado AI Act.
| Specification | Minimum | Recommended | 
|---|---|---|
| OS | Ubuntu 22.04, Windows 10 | Ubuntu 22.04 (Wayland) | 
| CPU | Ryzen 2700K or Intel Core i7-8700K | Ryzen 5800X or Intel Core i7-11700K | 
| Memory | 16 GB RAM | 32 GB RAM | 
| GPU | NVIDIA RTX 3060 or better | NVIDIA RTX 4090 or better | 
| Network | Broadband (used to download models) | Broadband (used to download models) | 
| Storage | 22 GB (with models), 6 GB (without models) | 100 GB or higher | 
- Install system requirements:
  ```bash
  sudo apt update && sudo apt upgrade -y
  sudo apt install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev python3-openssl git nvidia-cuda-toolkit pipewire libportaudio2 libxcb-cursor0 gnupg gpg-agent pinentry-curses espeak xclip cmake qt6-qpa-plugins qt6-wayland qt6-gtk-platformtheme mecab libmecab-dev mecab-ipadic-utf8 libxslt-dev mkcert
  sudo apt install espeak
  sudo apt install espeak-ng-espeak
  ```
- Create the airunner directory:
  ```bash
  sudo mkdir ~/.local/share/airunner
  sudo chown $USER:$USER ~/.local/share/airunner
  ```
- Install AI Runner (Python 3.13+ required; `pyenv` and `venv` are recommended, see the wiki for more info):
  ```bash
  pip install "typing-extensions==4.13.2"
  pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
  pip install airunner[all_dev]
  ```
- Run AI Runner:
  ```bash
  airunner
  ```
 
For more options, including Docker, see the Installation Wiki.
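
To confirm that the CUDA-enabled PyTorch wheel installed correctly before launching the GUI, a quick check along these lines can help (this is plain PyTorch, not an AI Runner command):

```python
# Sanity check for the CUDA-enabled PyTorch install; plain PyTorch, not part of AI Runner.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```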
- Run AI Runner: `airunner`
- Run the downloader: `airunner-setup`
- Build templates: `airunner-build-ui`
These are the sizes of the optional models that power AI Runner.

AI Runner uses the following stack.

By default, AI Runner installs essential TTS/STT and minimal LLM components, but AI art models must be supplied by the user. Organize them under your local AI Runner data directory (`~/.local/share/airunner`).
- The chatbot's mood and conversation summary system is enabled by default. The bot's mood and an emoji are shown with each bot message.
- When the LLM is updating the bot's mood or summarizing the conversation, a loading spinner and status message are shown in the chat prompt widget. The indicator disappears as soon as a new message arrives.
- This system is automatic and requires no user configuration.
- For more details, see the LLM Chat Prompt Widget README.
- The mood and summary engines are fully integrated into the agent runtime. When the agent updates the mood or summarizes the conversation, it emits a signal to the UI with a customizable loading message. The chat prompt widget displays this message as a loading indicator.
- See `src/airunner/handlers/llm/agent/agents/base.py` for integration details and `src/airunner/api/chatbot_services.py` for the API function.
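
As a rough illustration of the signal-then-update pattern described above, here is a generic Qt (PySide6) sketch; the class and signal names are hypothetical and are not AI Runner's actual API (see the files referenced above for the real integration):

```python
# Illustrative only: a generic Qt signal/slot sketch of "agent emits a loading
# message, the chat widget shows it until the next message arrives".
# Class and signal names are hypothetical, not AI Runner's actual API.
from PySide6.QtCore import QObject, Signal


class MoodUpdateEmitter(QObject):
    # Emitted with a customizable loading message while mood/summary updates run.
    mood_update_started = Signal(str)
    message_ready = Signal(str)


class ChatPromptIndicator(QObject):
    def __init__(self, emitter: MoodUpdateEmitter):
        super().__init__()
        emitter.mood_update_started.connect(self.show_loading)
        emitter.message_ready.connect(self.hide_loading)

    def show_loading(self, message: str) -> None:
        print(f"[loading] {message}")  # a real widget would show a spinner + text

    def hide_loading(self, _message: str) -> None:
        print("[loading indicator hidden]")


if __name__ == "__main__":
    emitter = MoodUpdateEmitter()
    indicator = ChatPromptIndicator(emitter)
    emitter.mood_update_started.emit("Updating mood...")
    emitter.message_ready.emit("Hello!")
```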
AI Runner includes an Aggregated Search Tool for querying multiple online services from a unified interface. This tool is available as a NodeGraphQt node, an LLM agent tool, and a Python API.
Supported Search Services:
- DuckDuckGo (no API key required)
- Wikipedia (no API key required)
- arXiv (no API key required)
- Google Custom Search (requires `GOOGLE_API_KEY` and `GOOGLE_CSE_ID`)
- Bing Web Search (requires `BING_SUBSCRIPTION_KEY`)
- NewsAPI (requires `NEWSAPI_KEY`)
- StackExchange (optional `STACKEXCHANGE_KEY` for higher quota)
- GitHub Repositories (optional `GITHUB_TOKEN` for higher rate limits)
- OpenLibrary (no API key required)

API Key Setup:
- Set the required API keys as environment variables before running AI Runner. Only services with valid keys will be queried.
- Example:
  ```bash
  export GOOGLE_API_KEY=your_google_api_key
  export GOOGLE_CSE_ID=your_google_cse_id
  export BING_SUBSCRIPTION_KEY=your_bing_key
  export NEWSAPI_KEY=your_newsapi_key
  export STACKEXCHANGE_KEY=your_stackexchange_key
  export GITHUB_TOKEN=your_github_token
  ```

Usage:
- Use the Aggregated Search node in NodeGraphQt for visual workflows.
- Call the tool from LLM agents or Python code:
  ```python
  from airunner.components.tools import AggregatedSearchTool

  results = await AggregatedSearchTool.aggregated_search("python", category="web")
  ```
- See `src/airunner/tools/README.md` for more details.

Note:
- DuckDuckGo, Wikipedia, arXiv, and OpenLibrary do not require API keys and can be used out-of-the-box.
 - For best results and full service coverage, configure all relevant API keys.
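
Because `aggregated_search` is a coroutine, calling it outside an async context requires an event loop. A minimal sketch follows; the loop over results assumes a dict keyed by service name, which may not match the tool's actual return shape (see `src/airunner/tools/README.md`):

```python
# Minimal sketch of calling the aggregated search tool from synchronous code.
# The iteration below assumes results come back as a dict keyed by service
# name; check src/airunner/tools/README.md for the actual shape.
import asyncio

from airunner.components.tools import AggregatedSearchTool


async def main() -> None:
    results = await AggregatedSearchTool.aggregated_search("python", category="web")
    for service, items in results.items():  # assumed shape: {service: [items]}
        print(f"{service}: {len(items)} result(s)")


if __name__ == "__main__":
    asyncio.run(main())
```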
 
AI Runner's local server enforces HTTPS-only operation for all local resources. HTTP is never used or allowed for local static assets or API endpoints. At startup, the server logs explicit details about HTTPS mode and the certificate/key in use. Security headers are set and only GET/HEAD methods are allowed for further hardening.
- Automatic Certificate Generation (Recommended):
  - By default, AI Runner will auto-generate a self-signed certificate in `~/.local/share/airunner/certs/` if one does not exist. No manual steps are required for most users.
  - If you want to provide your own certificate, place `cert.pem` and `key.pem` in the `certs` directory under your AI Runner base path.
- Manual Certificate Generation (Optional):
  - You can manually generate a self-signed certificate with:
    ```bash
    airunner-generate-cert
    ```
  - This will create `cert.pem` and `key.pem` in your current directory. Move them to your AI Runner certs directory if you want to use them.
- Configure AI Runner to Use SSL:
  - The app will automatically use the certificates in the certs directory. If you want to override, set the environment variables:
    ```bash
    export AIRUNNER_SSL_CERT=~/path/to/cert.pem
    export AIRUNNER_SSL_KEY=~/path/to/key.pem
    airunner
    ```
  - The server will use HTTPS if both files are provided.
- Access the App via `https://localhost:<port>`:
  - The default port is 5005 (configurable in `src/airunner/settings.py`).
  - Your browser may warn about the self-signed certificate; you can safely bypass this for local development.
- For production or remote access, use a certificate from a trusted CA.
- Never share your private key (`key.pem`).
- The server only binds to `127.0.0.1` by default for safety.
- For additional hardening, see the Security guide and the code comments in `local_http_server.py`.
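
If you want to verify from a script that the local server is up and serving HTTPS, a standard-library probe like the following can be used; the port (5005) and root path are assumptions based on the defaults above:

```python
# Minimal sketch: probe the local AI Runner HTTPS endpoint using only the
# standard library. The port (5005) and the root path "/" are assumptions
# based on the defaults described above; adjust to your configuration.
import ssl
import urllib.request

# Verification is disabled only because the local server uses a self-signed
# certificate; never do this for remote or production endpoints.
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

with urllib.request.urlopen("https://localhost:5005/", context=context) as response:
    print("Status:", response.status)
    for header, value in response.getheaders():
        print(f"{header}: {value}")
```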
You can generate a self-signed SSL certificate for local HTTPS with a single command:

```bash
airunner-generate-cert
```

This will create `cert.pem` and `key.pem` in your current directory. Use these files with the local HTTP server as described above.
See the SSL/TLS section for full details.
- For a browser-trusted local HTTPS experience (no warnings), install mkcert:
  ```bash
  # On Ubuntu/Debian:
  sudo apt install libnss3-tools
  # On macOS (or use your package manager):
  brew install mkcert
  mkcert -install
  ```
- If `mkcert` is not installed, AI Runner will fall back to OpenSSL self-signed certificates, which will show browser warnings.
- See the SSL/TLS section for details.

AI Runner provides several CLI commands for development, testing, and maintenance. Below is a summary of all available commands:
| Command | Description |
|---|---|
| `airunner` | Launch the AI Runner application GUI. |
| `airunner-setup` | Download and set up required models and data. |
| `airunner-build-ui` | Regenerate Python UI files from `.ui` templates. Run after editing any `.ui` file. |
| `airunner-compile-translations` | Compile translation files for internationalization. |
| `airunner-tests` | Run the full test suite using pytest. |
| `airunner-test-coverage-report` | Generate a test coverage report. |
| `airunner-docker` | Run Docker-related build and management commands for AI Runner. |
| `airunner-generate-migration` | Generate a new Alembic database migration. |
| `airunner-generate-cert` | Generate a self-signed SSL certificate for local HTTPS. |
| `airunner-mypy <filename>` | Run mypy type checking on a file with project-recommended flags. |
Usage Examples:

```bash
# Launch the app
airunner

# Download models and set up data
airunner-setup

# Build UI Python files from .ui templates
airunner-build-ui

# Compile translation files
airunner-compile-translations

# Run all tests
airunner-tests

# Generate a test coverage report
airunner-test-coverage-report

# Run Docker build or management tasks
airunner-docker

# Generate a new Alembic migration
airunner-generate-migration

# Generate a self-signed SSL certificate
airunner-generate-cert

# Run mypy type checking on a file
airunner-mypy src/airunner/components/document_editor/gui/widgets/document_editor_widget.py
```

For more details on each command, see the Wiki or run the command with `--help` if supported.
AI Runner supports a set of powerful chat slash commands, known as Slash Tools, that let you quickly trigger special actions, tools, or workflows directly from the chat prompt. These commands start with a / and can be used in any chat conversation.
- Type `/` in the chat prompt to see available commands (autocomplete is supported in the UI).
- Each slash command maps to a specific tool, agent action, or workflow.
- The set of available commands is extensible and may include custom or extension-provided tools.
 
| Slash | Command | Action Type | Description |
|---|---|---|---|
| `/a` | Image | GENERATE_IMAGE | Generate an image from a prompt |
| `/c` | Code | CODE | Run or generate code (if supported) |
| `/s` | Search | SEARCH | Search the web or knowledge base |
| `/w` | Workflow | WORKFLOW | Run a custom workflow (if supported) |
Note:
- Some slash tools (like `/a` for image) return an immediate confirmation message (e.g., "Ok, I've navigated to ...", "Ok, generating your image...").
- Others (like `/s` for search or `/w` for workflow) do not return a direct message, but instead show a loading indicator until the result is ready.
- The set of available slash commands is defined in `SLASH_COMMANDS` in `src/airunner/settings.py` and may be extended in the future.
For a full list of supported slash commands, type /help in the chat prompt or see the copilot-instructions.md.
We welcome pull requests for new features, bug fixes, or documentation improvements. You can also build and share extensions to expand AI Runner’s functionality. For details, see the Extensions Wiki.
Take a look at the Contributing document and the Development wiki page for detailed instructions.
AI Runner uses pytest for all automated testing. Test coverage is a priority, especially for utility modules.
- Headless-safe tests:
  - Located in `src/airunner/utils/tests/`
  - Can be run in any environment (including CI, headless servers, and developer machines)
  - Run with:
    ```bash
    pytest src/airunner/utils/tests/
    ```
- Display-required (Qt/Xvfb) tests:
  - Located in `src/airunner/utils/tests/xvfb_required/`
  - Require a real Qt display environment (cannot be run headlessly or with `pytest-qt`)
  - Typical for low-level Qt worker/signal/slot logic
  - Run with:
    ```bash
    xvfb-run -a pytest src/airunner/utils/tests/xvfb_required/
    # Or for a single file:
    xvfb-run -a pytest src/airunner/utils/tests/xvfb_required/test_background_worker.py
    ```
  - See the README in `xvfb_required/` for details.

- By default, only headless-safe tests are run in CI.
 - Display-required tests are intended for manual or special-case runs (e.g., when working on Qt threading or background worker code).
 - (Optional) You may automate this split in CI by adding a separate job/step for xvfb tests.
 
- All new utility code must be accompanied by tests.
- Use `pytest`, `pytest-qt` (for GUI), and `unittest.mock` for mocking dependencies.
- For more details on writing and organizing tests, see the project coding guidelines and the `src/airunner/utils/tests/` folder.
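
As a quick illustration of that testing style, a headless-safe test might stub its dependency with `unittest.mock`; the function under test below is a toy example, not actual AI Runner code:

```python
# Hypothetical example of a headless-safe utility test; the function and module
# names are illustrative only and do not exist in AI Runner.
from unittest.mock import MagicMock

import pytest


def fetch_title(url: str, opener) -> str:
    """Toy utility under test: returns an upper-cased page title from an opener."""
    response = opener(url)
    return response["title"].upper()


def test_fetch_title_uppercases_result():
    opener = MagicMock(return_value={"title": "ai runner"})
    assert fetch_title("https://example.com", opener) == "AI RUNNER"
    opener.assert_called_once_with("https://example.com")


def test_fetch_title_raises_on_missing_title():
    opener = MagicMock(return_value={})
    with pytest.raises(KeyError):
        fetch_title("https://example.com", opener)
```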
- Follow the copilot-instructions.md for all development, testing, and contribution guidelines.
- Always use the `airunner` command in the terminal to run the application.
- Always run tests in the terminal (not in the workspace test runner).
- Use `pytest` and `pytest-cov` for running tests and checking coverage.
- UI changes must be made in `.ui` files and rebuilt with `airunner-build-ui`.
- See the Wiki for architecture, usage, and advanced topics.
 
- API Service Layer
 - Main Window Model Load Balancer
 - Facehugger Shield Suite
 - NodeGraphQt Vendor Module
 - Xvfb-Required Tests
 - ORM Models
 
For additional details, see the Wiki.
If you find this project useful, please consider sponsoring its development. Your support helps cover the costs of infrastructure, development, and maintenance.
You can sponsor the project on GitHub Sponsors.
Thank you for your support!


