
Conversation

@Sameerlite Sameerlite (Collaborator) commented Oct 30, 2025

Title

Fix: Moderations endpoint now respects api_base configuration parameter

Relevant issues

Fixes LIT-1385

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement - see details)

  • I have added a screenshot of my new test passing locally

  • My PR passes all unit tests on make test-unit

  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🐛 Bug Fix

Changes

Problem

The Moderations endpoint (/v1/moderations) was not respecting the api_base configuration parameter. When making requests with a custom api_base in the config (e.g., for OpenAI Data Residency using https://us.api.openai.com), requests were still being sent to the default OpenAI endpoint.
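
For context, this is the sort of call that exposed the bug (a hedged, illustrative example; the model name, key, and base URL are placeholders, and api_base can equally come from the proxy config shown under Testing below):

```python
import litellm

# Before this fix, the api_base below was silently ignored for moderations
# and the request still went to the default https://api.openai.com endpoint.
response = litellm.moderation(
    model="openai/omni-moderation-latest",
    input="some text to classify",
    api_base="https://us.api.openai.com",  # e.g. OpenAI Data Residency endpoint
    api_key="sk-...",                      # placeholder
)
print(response.results)
```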

Root Cause

Both the synchronous moderation() and asynchronous amoderation() functions in litellm/main.py were not passing the api_base parameter when creating the OpenAI client.

Solution

1. Fixed amoderation() (async function) - lines 5335-5366

  • Moved the optional_params extraction and the get_llm_provider() call before the OpenAI client is created
  • Added api_base=optional_params.api_base or _dynamic_api_base when calling _get_openai_client() (see the sketch below)
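
A minimal sketch of the fixed async flow, assuming the helper names from the PR description (_get_openai_client, _dynamic_api_base); the real implementation in litellm/main.py differs in detail, and AsyncOpenAI stands in for the client factory here:

```python
from openai import AsyncOpenAI

# Hedged sketch of the post-fix amoderation() flow; not the verbatim
# litellm source.
async def amoderation_sketch(input: str, model: str, api_key: str, **kwargs):
    # Step 1 (the reordering): resolve the caller-supplied api_base
    # *before* the client is created, so it is known in time.
    dynamic_api_base = kwargs.pop("api_base", None)

    # Step 2: forward the resolved base URL into the client, mirroring
    # api_base=optional_params.api_base or _dynamic_api_base in the PR.
    # With base_url=None the OpenAI SDK falls back to its default endpoint.
    client = AsyncOpenAI(api_key=api_key, base_url=dynamic_api_base)
    return await client.moderations.create(input=input, model=model)
```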

2. Fixed moderation() (sync function) - lines 5290-5319

  • Extracted api_base from kwargs
  • Conditionally added the base_url parameter to the OpenAI client initialization when api_base is provided (see the sketch below)
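
And a sketch of the sync counterpart, where base_url is only added when the caller actually supplied an api_base (again illustrative, not the verbatim source):

```python
from openai import OpenAI

# Hedged sketch of the post-fix moderation() flow.
def moderation_sketch(input: str, model: str, api_key: str, **kwargs):
    api_base = kwargs.pop("api_base", None)

    client_kwargs = {"api_key": api_key}
    if api_base is not None:
        # Only override the endpoint when one was configured; otherwise
        # the OpenAI SDK keeps its default base URL.
        client_kwargs["base_url"] = api_base

    client = OpenAI(**client_kwargs)
    return client.moderations.create(input=input, model=model)
```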

3. Added a regression test - test_moderation_endpoint_with_api_base()

  • Location: tests/router_unit_tests/test_router_endpoints.py
  • Verifies that a custom api_base is correctly passed to the OpenAI client
  • Uses mocking to ensure the parameter flows through correctly (a sketch follows below)
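
A hedged sketch of what such a test can look like (the actual test body in tests/router_unit_tests/test_router_endpoints.py may differ; the patch target assumes _get_openai_client is importable from litellm.main as described above):

```python
from unittest.mock import AsyncMock, patch

import pytest
import litellm

@pytest.mark.asyncio
async def test_moderation_endpoint_with_api_base():
    custom_base = "https://us.api.openai.com"
    # Patch the client factory so no real network call is made.
    with patch("litellm.main._get_openai_client") as mock_get_client:
        mock_get_client.return_value = AsyncMock()
        await litellm.amoderation(
            model="openai/omni-moderation-latest",
            input="hello",
            api_base=custom_base,
            api_key="sk-test",
        )
        # The fix forwards api_base into the client factory.
        assert mock_get_client.call_args.kwargs.get("api_base") == custom_base
```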

Testing

Proxy config used to verify the fix:

```yaml
model_list:
  - model_name: "*"
    litellm_params:
      model: "openai/*"
      api_base: http://host.docker.internal:8080/
      api_key: dummy
  - model_name: "openai/*"
    litellm_params:
      model: "openai/*"
      api_base: http://host.docker.internal:8080/
      api_key: dummy
```

Request sent through the proxy:

```bash
curl http://localhost:4000/v1/moderations \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "model": "openai/omni-moderation-latest",
    "input": "...text to classify goes here..."
  }'
```

The proxy logs confirm the request is routed to http://host.docker.internal:8080/:

[screenshot: logs showing the moderation request sent to http://host.docker.internal:8080/]


@ishaan-jaff ishaan-jaff (Contributor) left a comment

LGTM

@ishaan-jaff ishaan-jaff merged commit eed3ad0 into main Oct 30, 2025
31 of 53 checks passed
