Models.dev is a comprehensive open-source database of AI model specifications, pricing, and capabilities.
There's no single database with information about all the available AI models. We started Models.dev as a community-contributed project to address this. We also use it internally in opencode.
You can access this data through an API.
```bash
curl https://models.dev/api.json
```

Use the Model ID field to look up any model; it's the identifier used by the AI SDK.
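As a minimal sketch, here's what a lookup might look like in TypeScript. The exact response shape isn't documented above, so this assumes api.json is an object keyed by provider ID where each provider carries a models map keyed by model ID; the provider and model IDs below are placeholders.

```ts
// Minimal sketch: fetch the Models.dev API and look up one model.
// Assumption: api.json is keyed by provider ID, and each provider has a
// `models` object keyed by model ID -- verify against the live response.
const res = await fetch("https://models.dev/api.json");
const providers = (await res.json()) as Record<
  string,
  { name?: string; models?: Record<string, { name?: string }> }
>;

// Placeholder IDs for illustration -- substitute a real provider and model ID.
const model = providers["some-provider"]?.models?.["some-model-id"];
console.log(model?.name ?? "model not found");
```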
The data is stored in the repo as TOML files, organized by provider and model. These files are used to generate this page and power the API.
We need your help keeping the data up to date.
To add a new model, start by checking if the provider already exists in the providers/ directory. If it doesn't:
- Create a new folder in providers/ with the provider's ID. For example, providers/newprovider/.
- Add a provider.toml file with the provider information:

```toml
name = "Provider Name"
```
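Assuming a hypothetical provider ID of newprovider, the resulting layout would look something like this (the model file itself is added in the next step):

```
providers/
└── newprovider/
    ├── provider.toml
    └── models/
        └── <model-id>.toml
```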
Create a new TOML file in the provider's models/ directory, where the filename is the model ID:

```toml
name = "Model Display Name"
attachment = true # or false - supports file attachments
reasoning = false # or true - supports reasoning / chain-of-thought
tool_call = true # or false - supports tool calling
temperature = true # or false - supports temperature control
knowledge = "2024-04" # Knowledge-cutoff date
release_date = "2025-02-19" # First public release date
last_updated = "2025-02-19" # Most recent update date
[cost]
input = 3.00 # Cost per million input tokens (USD)
output = 15.00 # Cost per million output tokens (USD)
cache_read = 0.30 # Cost per million cached read tokens (USD)
cache_write = 3.75 # Cost per million cached write tokens (USD)
[limit]
context = 200_000 # Maximum context window (tokens)
output = 8_192 # Maximum output tokens
[modalities]
input = ["text", "image"] # Supported input modalities
output = ["text"] # Supported output modalities- Fork this repo
- Create a new branch with your changes
- Add your provider and/or model files
- Open a PR with a clear description
There's a GitHub Action that will automatically validate your submission against our schema to ensure:
- All required fields are present
- Data types are correct
- Values are within acceptable ranges
- TOML syntax is valid
Providers and models must conform to the following schemas, as defined in app/schemas.ts.
Provider Schema:
- name: String - Display name of the provider
Model Schema:
- name: String - Display name of the model
- attachment: Boolean - Supports file attachments
- reasoning: Boolean - Supports reasoning / chain-of-thought
- tool_call: Boolean - Supports tool calling
- temperature: Boolean - Supports temperature control
- knowledge (optional): String - Knowledge-cutoff date in YYYY-MM or YYYY-MM-DD format
- release_date: String - First public release date in YYYY-MM or YYYY-MM-DD format
- last_updated: String - Most recent update date in YYYY-MM or YYYY-MM-DD format
- cost.input (optional): Number - Cost per million input tokens (USD)
- cost.output (optional): Number - Cost per million output tokens (USD)
- cost.cache_read (optional): Number - Cost per million cached read tokens (USD)
- cost.cache_write (optional): Number - Cost per million cached write tokens (USD)
- limit.context: Number - Maximum context window (tokens)
- limit.output: Number - Maximum output tokens
- modalities.input: Array of strings - Supported input modalities (e.g., ["text", "image", "audio", "video", "pdf"])
- modalities.output: Array of strings - Supported output modalities (e.g., ["text"])
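The authoritative definitions live in app/schemas.ts, which isn't reproduced here; as a rough TypeScript sketch of the constraints above (illustrative only, not the actual file):

```ts
// Illustrative sketch only -- the authoritative schema is in app/schemas.ts.
type Modality = "text" | "image" | "audio" | "video" | "pdf";

interface Provider {
  name: string; // Display name of the provider
}

interface Model {
  name: string;           // Display name of the model
  attachment: boolean;    // Supports file attachments
  reasoning: boolean;     // Supports reasoning / chain-of-thought
  tool_call: boolean;     // Supports tool calling
  temperature: boolean;   // Supports temperature control
  knowledge?: string;     // Knowledge-cutoff date, "YYYY-MM" or "YYYY-MM-DD"
  release_date: string;   // First public release date, "YYYY-MM" or "YYYY-MM-DD"
  last_updated: string;   // Most recent update date, "YYYY-MM" or "YYYY-MM-DD"
  cost?: {
    input?: number;       // USD per million input tokens
    output?: number;      // USD per million output tokens
    cache_read?: number;  // USD per million cached read tokens
    cache_write?: number; // USD per million cached write tokens
  };
  limit: {
    context: number;      // Maximum context window (tokens)
    output: number;       // Maximum output tokens
  };
  modalities: {
    input: Modality[];    // Supported input modalities
    output: Modality[];   // Supported output modalities
  };
}
```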
See existing providers in the providers/ directory for reference:
- providers/anthropic/ - Anthropic Claude models
- providers/openai/ - OpenAI GPT models
- providers/google/ - Google Gemini models
To run the frontend locally, make sure you have Bun installed. Then:

```bash
$ bun install
$ cd packages/web
$ bun run dev
```

This will open the frontend at http://localhost:3000.
Open an issue if you need help or have questions about contributing.
Models.dev is created by the maintainers of SST.