Fix: Moderations endpoint now respects `api_base` configuration parameter #16087
        
          
      
      
        
  
    
  
    
## Title

Fix: Moderations endpoint now respects `api_base` configuration parameter

## Relevant issues
Fixes LIT-1385
## Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

- I have added testing in the `tests/litellm/` directory (adding at least 1 test is a hard requirement - see details)
- I have added a screenshot of my new test passing locally
- My PR passes all unit tests on `make test-unit`
- My PR's scope is as isolated as possible; it only solves 1 specific problem
## Type

🐛 Bug Fix
## Changes

### Problem

The Moderations endpoint (`/v1/moderations`) was not respecting the `api_base` configuration parameter. When making requests with a custom `api_base` in the config (e.g., for OpenAI Data Residency using `https://us.api.openai.com`), requests were still being sent to the default OpenAI endpoint.
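For context, here is a minimal sketch of the affected call path (the model name and input text are illustrative; `api_base` is the parameter this PR fixes):

```python
import litellm

# Before this fix, the custom api_base below was silently ignored and the
# request still went to the default https://api.openai.com endpoint.
response = litellm.moderation(
    input="some text to classify",
    model="omni-moderation-latest",        # illustrative model name
    api_base="https://us.api.openai.com",  # US Data Residency endpoint
)
print(response.results)
```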
### Root Cause

Both the synchronous `moderation()` and asynchronous `amoderation()` functions in `litellm/main.py` were not passing the `api_base` parameter when creating the OpenAI client, roughly as in the sketch below.
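In simplified form, the buggy pattern was the following (a sketch, not the literal litellm code):

```python
from openai import OpenAI

def moderation_buggy(input: str, api_key: str, **kwargs):
    # kwargs may contain api_base, but it was never read here, so the
    # client always defaulted to https://api.openai.com/v1.
    client = OpenAI(api_key=api_key)
    return client.moderations.create(input=input)
```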
### Solution

1. Fixed `amoderation()` (async function) - lines 5335-5366
   - Added `optional_params` extraction and a `get_llm_provider()` call before creating the OpenAI client
   - Passes the `api_base=optional_params.api_base or _dynamic_api_base` parameter when calling `_get_openai_client()` (see the first sketch below)
2. Fixed `moderation()` (sync function) - lines 5290-5319
   - Extracts `api_base` from kwargs
   - Passes the `base_url` parameter to the OpenAI client initialization when `api_base` is provided (second sketch below)
3. Added a regression test - `test_moderation_endpoint_with_api_base()` in `tests/router_unit_tests/test_router_endpoints.py`
   - Verifies `api_base` is correctly passed to the OpenAI client (third sketch below)
   - The screenshot of the test run shows it trying to use http://host.docker.internal:8080/
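A rough sketch of the async fix (item 1); the surrounding internals are simplified, and the exact `_get_openai_client()` signature is assumed from the description above:

```python
# Inside amoderation() in litellm/main.py (simplified sketch)
from litellm.types.router import GenericLiteLLMParams

optional_params = GenericLiteLLMParams(**kwargs)
_model, _provider, _dynamic_api_key, _dynamic_api_base = litellm.get_llm_provider(
    model=model or "omni-moderation-latest",
    custom_llm_provider=custom_llm_provider,
)
openai_client = openai_chat_completions._get_openai_client(
    is_async=True,
    api_key=api_key or _dynamic_api_key,
    # The fix: prefer an explicitly configured api_base, falling back to
    # the base resolved by get_llm_provider().
    api_base=optional_params.api_base or _dynamic_api_base,
)
```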
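And the sync fix (item 2), where the client is constructed directly; again a simplified sketch:

```python
# Inside moderation() in litellm/main.py (simplified sketch)
from openai import OpenAI

api_base = kwargs.get("api_base")  # honor a caller/config-supplied api_base
if api_base is not None:
    openai_client = OpenAI(api_key=api_key, base_url=api_base)
else:
    openai_client = OpenAI(api_key=api_key)
response = openai_client.moderations.create(input=input, model=model)
```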
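The regression test could look roughly like this sketch (it assumes `pytest-asyncio` and that patching `_get_openai_client` on `litellm.main.openai_chat_completions` matches the actual test's mocking approach):

```python
from unittest.mock import AsyncMock, patch

import pytest

import litellm

@pytest.mark.asyncio
async def test_moderation_endpoint_with_api_base():
    """Verify that a custom api_base is forwarded to the OpenAI client."""
    custom_api_base = "http://host.docker.internal:8080/"
    with patch(
        "litellm.main.openai_chat_completions._get_openai_client",
        return_value=AsyncMock(),
    ) as mock_get_client:
        await litellm.amoderation(
            input="test input",
            model="omni-moderation-latest",
            api_key="sk-test",
            api_base=custom_api_base,
        )
        # The client factory must have received the custom api_base.
        assert mock_get_client.call_args.kwargs["api_base"] == custom_api_base
```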
