
Conversation

@markpollack (Member)

Summary

Add support for Ollama's thinking mode, which enables reasoning-capable models to emit their internal reasoning process in a separate field before providing the final answer.

Key Changes

  • Implement ThinkOption sealed interface with boolean and level variants to support different model requirements
  • Add think configuration to OllamaChatOptions with builder methods (enableThinking(), disableThinking())
  • Filter the think setting out of the options map so that it is sent as a top-level ChatRequest field, as the Ollama API expects
  • Add QWEN3_4B_THINKING model constant for the thinking-enabled variant of Qwen3 4B
  • Upgrade Ollama test container to 0.12.10 to enable testing of thinking mode features
  • Document auto-enable behavior for thinking-capable models in both code and documentation
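The ThinkOption design described above can be sketched as a sealed interface with two record variants; the names below follow the PR description, but details such as nesting and accessor names are assumptions, not necessarily the merged API:

```java
import java.util.List;

// Sketch of the ThinkOption design: a boolean variant for models that
// simply toggle thinking, and a level variant for models such as GPT-OSS
// that accept "low", "medium", or "high".
sealed interface ThinkOption permits ThinkOption.ThinkBoolean, ThinkOption.ThinkLevel {

    // Boolean variant: enable or disable thinking outright.
    record ThinkBoolean(boolean enabled) implements ThinkOption {
    }

    // Level variant: validates the level string at construction time.
    record ThinkLevel(String level) implements ThinkOption {

        private static final List<String> VALID_LEVELS = List.of("low", "medium", "high");

        public ThinkLevel {
            if (level != null && !VALID_LEVELS.contains(level)) {
                throw new IllegalArgumentException(
                        "think level must be one of " + VALID_LEVELS + ", got: " + level);
            }
        }
    }
}
```

A sealed interface lets request-serialization code switch exhaustively over the two variants, emitting either a boolean or a string for the `think` field.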

Supported Models

  • Qwen3 (qwen3:*-thinking variants)
  • DeepSeek-v3.1
  • DeepSeek R1
  • GPT-OSS (supports thinking levels: "low", "medium", "high")
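For a level-capable model such as GPT-OSS, the serialized Ollama `/api/chat` request would carry `think` as a top-level field rather than inside `options` (a hypothetical request body for illustration; the exact payload depends on the configured options):

```json
{
  "model": "gpt-oss",
  "messages": [
    { "role": "user", "content": "Why is the sky blue?" }
  ],
  "think": "high",
  "options": {
    "temperature": 0.7
  }
}
```

For boolean-only models such as deepseek-r1, `think` would instead be `true` or `false`.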

Important Notes

Default Behavior (Ollama 0.12+): Thinking-capable models (such as qwen3:*-thinking, deepseek-r1, deepseek-v3.1) auto-enable thinking by default when the think option is not explicitly set. Standard models (such as qwen2.5:*, llama3.2) do not enable thinking by default.

To explicitly control this behavior, use .enableThinking() or .disableThinking() builder methods.
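When thinking is set explicitly, the `think` value must be lifted out of the model options and placed at the top level of the request, per the filtering change above. A minimal, hypothetical sketch of that lifting step (the helper name and map-based representation are illustrative, not the actual implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Illustration of "filter think from the options map": the serialized
// request carries think as a top-level field, never inside options.
final class ThinkRequestFilter {

    // Builds a request map in which any "think" entry from the model
    // options has been moved to the top level.
    static Map<String, Object> toRequest(String model, Map<String, Object> options) {
        Map<String, Object> filteredOptions = new HashMap<>(options);
        Object think = filteredOptions.remove("think");

        Map<String, Object> request = new HashMap<>();
        request.put("model", model);
        request.put("options", filteredOptions);
        if (think != null) {
            request.put("think", think);
        }
        return request;
    }
}
```

Leaving `think` out of the request entirely (rather than sending `false`) is what allows Ollama 0.12+ to apply its auto-enable default for thinking-capable models.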

Testing

All existing Ollama tests pass with the updated test container version, and new tests verify the thinking mode functionality.

Add support for Ollama's thinking mode, which enables reasoning-capable
models to emit their internal reasoning process in a separate field.

Key changes:
- Implement ThinkOption sealed interface with boolean and level variants
- Add think configuration to OllamaChatOptions with builder methods
- Filter think from options map to send as top-level request field
- Add QWEN3_4B_THINKING model constant for thinking-enabled variant
- Upgrade Ollama test container to 0.12.10 for thinking support
- Document auto-enable behavior for thinking-capable models

Supported models: Qwen3, DeepSeek-v3.1, DeepSeek R1, GPT-OSS.

Note: Thinking-capable models auto-enable thinking by default in
Ollama 0.12+. Use .disableThinking() to explicitly disable.

Signed-off-by: Mark Pollack <mark.pollack@broadcom.com>
markpollack added this to the 1.1.0.RC1 milestone on Nov 7, 2025
@liugddx (Contributor) commented Nov 7, 2025

#3386

@ilayaperumalg (Member)

@sunyuhan1998 @liugddx yes, this is a refined/re-worked PR for Ollama's thinking mode support. Requesting review for the same from this PR.

Comment on lines +146 to +150:

```java
public ThinkLevel {
    if (level != null && !List.of("low", "medium", "high").contains(level)) {
        throw new IllegalArgumentException("think level must be 'low', 'medium', or 'high', got: " + level);
    }
}
```

Contributor

Suggested change:

```java
private static final List<String> VALID_LEVELS = List.of("low", "medium", "high");

public ThinkLevel {
    if (level != null && !VALID_LEVELS.contains(level)) {
        throw new IllegalArgumentException(
                "think level must be one of " + VALID_LEVELS + ", got: " + level);
    }
}
```

Member

@liugddx Thanks for the feedback.

@sunyuhan1998 (Contributor)

> @sunyuhan1998 @liugddx yes, this is a refined/re-worked PR for Ollama's thinking mode support. Requesting review for the same from this PR.

Yes, sorry I've been too busy recently to address some of the issues in #3386. The current PR appears to be much more comprehensive than #3386, and many users are eagerly awaiting support for this capability. Looking forward to its swift merge into the main branch! Thanks to @markpollack @ilayaperumalg

@ilayaperumalg (Member)

@sunyuhan1998 no problem, thanks for the quick response and all the contributions! We'll take care of merging this into 1.1.0-RC

@ilayaperumalg (Member)

Rebased and merged as 0b8293e
