This repository was archived by the owner on Aug 23, 2025. It is now read-only.
12 changes: 6 additions & 6 deletions .github/workflows/publish.yml
@@ -12,23 +12,23 @@ jobs:
- uses: actions/checkout@v3
with:
fetch-depth: 0

- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
registry-url: 'https://registry.npmjs.org/'
cache: 'npm'

- name: Install dependencies
run: npm ci

- name: Run tests
run: npm test

- name: Run typecheck
run: npm run typecheck

- name: Publish
if: startsWith(github.ref, 'refs/tags/v')
env:
@@ -40,4 +40,4 @@ jobs:
echo "GITHUB_REF->"$GITHUB_REF
# test tag signature
git tag -v $(git describe --tags --abbrev=0)
npm publish
npm publish
14 changes: 7 additions & 7 deletions .github/workflows/test.yml
@@ -2,27 +2,27 @@ name: Test and Typecheck

on:
push:
branches: [ main ]
branches: [main]
pull_request:
branches: [ main ]
branches: [main]

jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3

- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
cache: 'npm'

- name: Install dependencies
run: npm ci

- name: Run tests
run: npm test

- name: Run typecheck
run: npm run typecheck
run: npm run typecheck
12 changes: 10 additions & 2 deletions CHANGELOG.md
@@ -3,14 +3,15 @@
## 0.10.0 (2025-05-13)

### Changes

- Renamed `callAI` to `callAi` for consistent casing across the library
- Added backward compatibility export to support existing code using `callAI` spelling
- Added tests to ensure backward compatibility works correctly


## 0.5.0 (2024-06-28)

### Features

- Added comprehensive multi-model support for structured JSON output
- Implemented model-specific strategies for different AI providers:
- OpenAI/GPT models use native JSON schema support
@@ -25,6 +26,7 @@
## 0.4.1 (2024-06-22)

### Fixes

- Improved error handling for both streaming and non-streaming API calls
- Added better error response format consistency
- Addressed TypeScript type issues in tests
@@ -33,6 +35,7 @@
## 0.4.0 (2024-06-22)

### Features

- Added default "result" name for all JSON schemas
- Improved test coverage for schema name handling
- Enhanced documentation for schema name property
@@ -41,6 +44,7 @@
## 0.3.1 (2024-06-22)

### Improvements

- Added proper support for schema name property in OpenRouter JSON schemas
- Updated documentation to clarify that name is optional but supported
- Ensured examples in documentation consistently show name usage
@@ -49,6 +53,7 @@
## 0.3.0 (2024-06-22)

### Bug Fixes

- Fixed JSON schema structure for OpenRouter API integration
- Removed unnecessary nested `schema` object within the JSON schema
- Removed `provider.require_parameters` field which was causing issues
@@ -58,6 +63,7 @@
## 0.2.1 (2024-06-17)

### Improvements

- Enhanced schema handling to better support JSON schema definition
- Added test coverage for complex schema use cases
- Updated documentation with comprehensive examples for structured responses
@@ -66,12 +72,14 @@
## 0.2.0 (2024-06-16)

### Breaking Changes

- Simplified API by moving `schema` parameter into the options object
- Changed streaming to be explicitly opt-in (default is non-streaming)
- Updated return type to be `Promise<string>` for non-streaming and `AsyncGenerator` for streaming
- Removed need for `null` parameter when not using schema

### Improvements

- Improved TypeScript types and documentation
- Reduced code duplication by extracting common request preparation logic
- Enhanced error handling for both streaming and non-streaming modes
@@ -83,4 +91,4 @@
- Initial release
- Support for streaming responses
- JSON schema for structured output
- Compatible with OpenRouter and OpenAI API
- Compatible with OpenRouter and OpenAI API
118 changes: 60 additions & 58 deletions README.md
@@ -15,22 +15,22 @@ pnpm add call-ai
## Usage

```typescript
import { callAi } from 'call-ai';
import { callAi } from "call-ai";

// Basic usage with string prompt (non-streaming by default)
const response = await callAi('Explain quantum computing in simple terms', {
apiKey: 'your-api-key',
model: 'gpt-4'
const response = await callAi("Explain quantum computing in simple terms", {
apiKey: "your-api-key",
model: "gpt-4",
});

// The response is the complete text
console.log(response);

// With streaming enabled (returns an AsyncGenerator)
const generator = callAi('Tell me a story', {
apiKey: 'your-api-key',
model: 'gpt-4',
stream: true
const generator = callAi("Tell me a story", {
apiKey: "your-api-key",
model: "gpt-4",
stream: true,
});

// Process streaming updates
@@ -40,13 +40,13 @@ for await (const chunk of generator) {

// Using message array for more control
const messages = [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'Explain quantum computing in simple terms' }
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "Explain quantum computing in simple terms" },
];

const response = await callAi(messages, {
apiKey: 'your-api-key',
model: 'gpt-4'
apiKey: "your-api-key",
model: "gpt-4",
});

console.log(response);
@@ -55,16 +55,16 @@ console.log(response);
const schema = {
name: "exercise_summary",
properties: {
title: { type: 'string' },
summary: { type: 'string' },
points: { type: 'array', items: { type: 'string' } }
title: { type: "string" },
summary: { type: "string" },
points: { type: "array", items: { type: "string" } },
},
required: ['title', 'summary']
required: ["title", "summary"],
};

const response = await callAi('Summarize the benefits of exercise', {
apiKey: 'your-api-key',
schema: schema
const response = await callAi("Summarize the benefits of exercise", {
apiKey: "your-api-key",
schema: schema,
});

const structuredOutput = JSON.parse(response);
@@ -73,24 +73,24 @@ console.log(structuredOutput.title);
// Streaming with schema for OpenRouter structured JSON output
const schema = {
properties: {
title: { type: 'string' },
items: {
type: 'array',
items: {
type: 'object',
title: { type: "string" },
items: {
type: "array",
items: {
type: "object",
properties: {
name: { type: 'string' },
description: { type: 'string' }
}
}
}
}
name: { type: "string" },
description: { type: "string" },
},
},
},
},
};

const generator = callAi('Create a list of sci-fi books', {
apiKey: 'your-api-key',
const generator = callAi("Create a list of sci-fi books", {
apiKey: "your-api-key",
stream: true,
schema: schema
schema: schema,
});

for await (const chunk of generator) {
@@ -124,20 +124,20 @@ Different LLMs have different strengths when working with structured data. Based

### Schema Complexity Guide

| Model Family | Grade | Simple Flat Schema | Complex Flat Schema | Nested Schema | Best For |
|--------------|-------|-------------------|---------------------|---------------|----------|
| OpenAI | A | ✅ Excellent | ✅ Excellent | ✅ Excellent | Most reliable for all schema types |
| Gemini | A | ✅ Excellent | ✅ Excellent | ✅ Good | Good all-around performance, especially with flat schemas |
| Claude | B | ✅ Excellent | ⚠️ Good (occasional JSON errors) | ✅ Good | Simple schemas, robust handling of complex prompts |
| Llama 3 | C | ✅ Good | ✅ Good | ❌ Poor | Simpler flat schemas, may struggle with nested structures |
| Deepseek | C | ✅ Good | ✅ Good | ❌ Poor | Basic flat schemas only |
| Model Family | Grade | Simple Flat Schema | Complex Flat Schema | Nested Schema | Best For |
| ------------ | ----- | ------------------ | -------------------------------- | ------------- | --------------------------------------------------------- |
| OpenAI | A | ✅ Excellent | ✅ Excellent | ✅ Excellent | Most reliable for all schema types |
| Gemini | A | ✅ Excellent | ✅ Excellent | ✅ Good | Good all-around performance, especially with flat schemas |
| Claude | B | ✅ Excellent | ⚠️ Good (occasional JSON errors) | ✅ Good | Simple schemas, robust handling of complex prompts |
| Llama 3 | C | ✅ Good | ✅ Good | ❌ Poor | Simpler flat schemas, may struggle with nested structures |
| Deepseek | C | ✅ Good | ✅ Good | ❌ Poor | Basic flat schemas only |

### Schema Structure Recommendations

1. **Flat schemas perform better across all models**. If you need maximum compatibility, avoid deeply nested structures.

2. **Field names matter**. Some models have preferences for certain property naming patterns:
- Use simple, common naming patterns like `name`, `type`, `items`, `price`
- Use simple, common naming patterns like `name`, `type`, `items`, `price`
- Avoid deeply nested object hierarchies (more than 2 levels deep)
- Keep array items simple (strings or flat objects)
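
To make the recommendations concrete, here is a sketch of two hypothetical schemas (not from the library's docs): a flat one that, per the table above, all listed model families handle well, and a nested one that Llama 3 and Deepseek may fail to honor:

```typescript
// Flat schema: at most two levels deep — broadly compatible (hypothetical example)
const flatBookSchema = {
  name: "book_list",
  properties: {
    titles: { type: "array", items: { type: "string" } }, // array of plain strings
    genre: { type: "string" },
    count: { type: "number" },
  },
  required: ["titles"],
};

// Nested schema: objects inside array items, three levels deep —
// the kind of structure weaker models may struggle to emit reliably
const nestedBookSchema = {
  name: "book_catalog",
  properties: {
    shelves: {
      type: "array",
      items: {
        type: "object",
        properties: {
          label: { type: "string" },
          books: {
            type: "array",
            items: {
              type: "object",
              properties: { title: { type: "string" } },
            },
          },
        },
      },
    },
  },
};
```

When targeting a C-grade model from the table, preferring the first shape and post-processing the result into a nested structure in your own code is the safer route.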

@@ -154,36 +154,36 @@
You can provide your API key in three ways:

1. Directly in the options:

```typescript
const response = await callAi('Hello', { apiKey: 'your-api-key' });
const response = await callAi("Hello", { apiKey: "your-api-key" });
```

2. Set globally in the browser:

```typescript
window.CALLAI_API_KEY = 'your-api-key';
const response = await callAi('Hello');
window.CALLAI_API_KEY = "your-api-key";
const response = await callAi("Hello");
```

3. Use environment variables in Node.js (with a custom implementation):

```typescript
// Example of environment variable integration
import { callAi } from 'call-ai';
import { callAi } from "call-ai";
const apiKey = process.env.OPENAI_API_KEY || process.env.OPENROUTER_API_KEY;
const response = await callAi('Hello', { apiKey });
const response = await callAi("Hello", { apiKey });
```
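
The three sources above can be combined into one resolution helper. This is a hypothetical sketch (not part of call-ai), assuming explicit options take precedence over the browser global, which takes precedence over environment variables:

```typescript
// Hypothetical helper: resolve an API key from the three documented sources.
// Assumed precedence: explicit option > window.CALLAI_API_KEY > env vars.
function resolveApiKey(explicit?: string): string | undefined {
  if (explicit) return explicit;

  // Browser global (window.CALLAI_API_KEY is a globalThis property there)
  const g = globalThis as { CALLAI_API_KEY?: string };
  if (g.CALLAI_API_KEY) return g.CALLAI_API_KEY;

  // Node.js: fall back to environment variables
  const env = (globalThis as { process?: { env?: Record<string, string | undefined> } })
    .process?.env;
  return env?.OPENAI_API_KEY ?? env?.OPENROUTER_API_KEY;
}
```

A helper like this keeps call sites uniform (`callAi(prompt, { apiKey: resolveApiKey() })`) regardless of runtime.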

## API

```typescript
// Main function
function callAi(
prompt: string | Message[],
options?: CallAIOptions
): Promise<string> | AsyncGenerator<string, string, unknown>
function callAi(prompt: string | Message[], options?: CallAIOptions): Promise<string> | AsyncGenerator<string, string, unknown>;

// Types
type Message = {
role: 'user' | 'system' | 'assistant';
role: "user" | "system" | "assistant";
content: string;
};

@@ -209,12 +209,12 @@ interface CallAIOptions {

### Options

* `apiKey`: Your API key (can also be set via window.CALLAI_API_KEY)
* `model`: Model identifier (default: 'openrouter/auto')
* `endpoint`: API endpoint (default: 'https://openrouter.ai/api/v1/chat/completions')
* `stream`: Enable streaming responses (default: false)
* `schema`: Optional JSON schema for structured output
* Any other options are passed directly to the API (temperature, max_tokens, etc.)
- `apiKey`: Your API key (can also be set via window.CALLAI_API_KEY)
- `model`: Model identifier (default: 'openrouter/auto')
- `endpoint`: API endpoint (default: 'https://openrouter.ai/api/v1/chat/completions')
- `stream`: Enable streaming responses (default: false)
- `schema`: Optional JSON schema for structured output
- Any other options are passed directly to the API (temperature, max_tokens, etc.)
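
The last bullet can be sketched as follows. This is an assumption about how such an option split could work, not call-ai's actual implementation: documented options configure the request, and every other key is copied into the JSON body sent to the endpoint.

```typescript
// Keys the library interprets itself (per the options list above)
const KNOWN_OPTIONS = new Set(["apiKey", "model", "endpoint", "stream", "schema"]);

function buildRequestBody(
  prompt: string,
  options: Record<string, unknown>,
): Record<string, unknown> {
  // Keep only the keys the library does not consume itself
  const passthrough = Object.fromEntries(
    Object.entries(options).filter(([key]) => !KNOWN_OPTIONS.has(key)),
  );
  return {
    model: (options.model as string) ?? "openrouter/auto",
    messages: [{ role: "user", content: prompt }],
    stream: options.stream === true,
    ...passthrough, // temperature, max_tokens, etc. land here unchanged
  };
}

const body = buildRequestBody("Hello", {
  apiKey: "your-api-key", // consumed for the auth header, never sent in the body
  temperature: 0.7,
  max_tokens: 64,
});
```

The practical takeaway: any provider parameter you set in the options object reaches the API without call-ai needing to know about it.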

## License

@@ -251,12 +251,14 @@ This library uses GitHub Actions to automate the release process:
5. Push changes and tag: `git push origin main vX.Y.Z`

The GitHub workflow in `.github/workflows/publish.yml` will:

- Automatically trigger when a new tag is pushed
- Run tests and type checking
- Verify the tag signature
- Publish the package to npm

When making significant changes, remember to:

- Document breaking changes in the changelog
- Update documentation to reflect API changes
- Update TypeScript types
- Update TypeScript types