feat(ollama): add thinking mode support for reasoning models #4821
Conversation
Add support for Ollama's thinking mode, which enables reasoning-capable models to emit their internal reasoning process in a separate field.

Key changes:
- Implement ThinkOption sealed interface with boolean and level variants
- Add think configuration to OllamaChatOptions with builder methods
- Filter think from options map to send as top-level request field
- Add QWEN3_4B_THINKING model constant for thinking-enabled variant
- Upgrade Ollama test container to 0.12.10 for thinking support
- Document auto-enable behavior for thinking-capable models

Supported models: Qwen3, DeepSeek-v3.1, DeepSeek R1, GPT-OSS.

Note: Thinking-capable models auto-enable thinking by default in Ollama 0.12+. Use .disableThinking() to explicitly disable.

Signed-off-by: Mark Pollack <mark.pollack@broadcom.com>
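As a rough sketch of the shape the commit message describes (a sealed interface with a boolean variant and a level variant), the type might look like the following. Names and structure here are assumed from the description above, not copied from the merged implementation:

```java
import java.util.List;

// Sketch of a ThinkOption sealed interface with boolean and level
// variants, as described in the commit message. Illustrative only.
sealed interface ThinkOption permits ThinkOption.ThinkBoolean, ThinkOption.ThinkLevel {

    // Boolean variant: thinking explicitly enabled or disabled.
    record ThinkBoolean(boolean enabled) implements ThinkOption {
    }

    // Level variant: one of the reasoning-effort levels.
    record ThinkLevel(String level) implements ThinkOption {
        public ThinkLevel {
            if (level != null && !List.of("low", "medium", "high").contains(level)) {
                throw new IllegalArgumentException(
                        "think level must be 'low', 'medium', or 'high', got: " + level);
            }
        }
    }
}
```

A sealed interface keeps the set of variants closed, so request serialization can exhaustively switch over boolean vs. level without a default branch.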
@sunyuhan1998 @liugddx yes, this is a refined and reworked PR for Ollama's thinking mode support. Requesting your review on this PR.
```java
public ThinkLevel {
    if (level != null && !List.of("low", "medium", "high").contains(level)) {
        throw new IllegalArgumentException("think level must be 'low', 'medium', or 'high', got: " + level);
    }
}
```
Suggested change:

```java
private static final List<String> VALID_LEVELS = List.of("low", "medium", "high");

public ThinkLevel {
    if (level != null && !VALID_LEVELS.contains(level)) {
        throw new IllegalArgumentException(
                "think level must be one of " + VALID_LEVELS + ", got: " + level);
    }
}
```
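For context, the suggested compact constructor accepts null (meaning the level is unset) and the three fixed levels, and rejects everything else. A self-contained sketch mirroring the suggested diff (illustrative, not the merged code):

```java
import java.util.List;

// Standalone record mirroring the suggested validation: the valid
// levels are precomputed once and reused in the error message.
record ThinkLevel(String level) {

    private static final List<String> VALID_LEVELS = List.of("low", "medium", "high");

    ThinkLevel {
        if (level != null && !VALID_LEVELS.contains(level)) {
            throw new IllegalArgumentException(
                    "think level must be one of " + VALID_LEVELS + ", got: " + level);
        }
    }
}
```

Hoisting the list into a constant avoids rebuilding it on every construction and keeps the accepted values and the error message in sync.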
@liugddx Thanks for the feedback.
Yes, sorry, I've been too busy recently to address some of the issues in #3386. The current PR is much more comprehensive than #3386, and many users are eagerly awaiting support for this capability. Looking forward to its swift merge into the main branch! Thanks to @markpollack @ilayaperumalg
@sunyuhan1998 no problem, thanks for the quick response and all the contributions! We'll take care of merging this into 1.1.0-RC
Rebased and merged as 0b8293e |
Summary

Add support for Ollama's thinking mode, which enables reasoning-capable models to emit their internal reasoning process in a separate field before providing the final answer.

Key Changes
- ThinkOption sealed interface with boolean and level variants
- think configuration on OllamaChatOptions with builder methods (enableThinking(), disableThinking())
- think filtered from the options map and sent as a top-level request field
- QWEN3_4B_THINKING model constant for the thinking-enabled variant
- Ollama test container upgraded to 0.12.10 for thinking support

Supported Models

Qwen3, DeepSeek-v3.1, DeepSeek R1, GPT-OSS

Important Notes

Default Behavior (Ollama 0.12+): Thinking-capable models (such as qwen3:*-thinking, deepseek-r1, deepseek-v3.1) auto-enable thinking by default when the think option is not explicitly set. Standard models (such as qwen2.5:*, llama3.2) do not enable thinking by default. To explicitly control this behavior, use the .enableThinking() or .disableThinking() builder methods.

Testing

All existing Ollama tests pass with the updated test container version, and new tests verify the thinking mode functionality.
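The "filter think from the options map and send it as a top-level request field" change can be illustrated with a minimal sketch. The class and method names here are hypothetical, chosen for the example, and do not come from the merged code:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of promoting "think" from the per-model options map
// to a top-level request field, as the key changes describe.
final class OllamaRequestSketch {

    static Map<String, Object> buildRequest(String model, Map<String, Object> options) {
        Map<String, Object> opts = new HashMap<>(options);
        Object think = opts.remove("think"); // strip from model options

        Map<String, Object> request = new HashMap<>();
        request.put("model", model);
        if (think != null) {
            request.put("think", think); // send as a top-level field instead
        }
        request.put("options", opts);
        return request;
    }
}
```

Keeping "think" out of the options map matters because Ollama treats it as a request-level field, not a model sampling option; leaving it nested would cause it to be silently ignored.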