Automatically use ROCm when appropriate #328
Conversation
Summary of Changes (Gemini Code Assist): This pull request introduces an automatic detection mechanism for supported AMD GPUs on Linux. Upon detecting a compatible AMD GPU, the system will automatically select the ROCm variant.
Hey there - I've reviewed your changes - here's some feedback:
- The environment-based GPU autodetection in init() can be surprising; consider moving that logic into an explicit configuration method or flag instead of a package init side-effect.
- The GPU check in main.go duplicates the detection later in the backend; consider removing this duplicate scan to avoid redundant file system accesses.
- Scanning /sys/class/kfd on every download invocation may impact performance; consider caching the detected GPU presence per process.
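To illustrate the third point above, a process-level cache could be built around sync.Once so the sysfs scan happens at most once per process. This is a minimal sketch, not the PR's code; only the /sys/class/kfd path and the gpuinfo package name are taken from this thread, the rest is assumed.

```go
// Sketch: probe for the amdgpu/KFD sysfs entry at most once per process so
// repeated download invocations don't rescan /sys/class/kfd.
package gpuinfo

import (
	"os"
	"sync"
)

var (
	amdProbeOnce sync.Once // guards the one-time probe
	amdPresent   bool      // cached result of the probe
)

// hasAMDKFD reports whether /sys/class/kfd exists, caching the answer.
func hasAMDKFD() bool {
	amdProbeOnce.Do(func() {
		// The kfd class directory is created by the amdgpu driver on Linux.
		_, err := os.Stat("/sys/class/kfd")
		amdPresent = err == nil
	})
	return amdPresent
}
```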
Code Review
This pull request adds automatic detection of AMD GPUs to use the ROCm variant on Linux. The changes are well-intentioned, but there are several critical issues that will prevent the code from building due to function and method redeclarations across different build tags. I've provided comments and suggestions to resolve these build-breaking issues. Additionally, the core GPU detection logic contains a critical bug that could lead to a panic, and there are several opportunities for improving code quality, readability, and performance, for which I've also left suggestions.
Force-pushed from 3d3badd to b7e9f39.
On Linux at least
Signed-off-by: Eric Curtin <eric.curtin@docker.com>
This means docker model install-runner --gpu none won't have any effect anymore, right? Because we'll automatically update to the ROCm variant if it's supported.
ShouldUseGPUVariantLock.Lock()
defer ShouldUseGPUVariantLock.Unlock()
ShouldUseGPUVariant = true
}
I'd rather have ShouldUseGPUVariant = runtime.GOOS == "linux" in download.go instead of an init.
For more context, this variable is used by Docker Desktop to determine whether it should display the GPU-backed engine checkbox in the UI.
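For concreteness, the reviewer's alternative could look roughly like the following in download.go. This is only a sketch of the suggestion; the package name and the placement of the mutex alongside the variable are assumptions based on the diff fragment above.

```go
// Sketch of initializing the flag at declaration time instead of in init().
package download

import (
	"runtime"
	"sync"
)

var (
	// ShouldUseGPUVariant is read by Docker Desktop to decide whether to show
	// the GPU-backed engine checkbox in the UI.
	ShouldUseGPUVariant     = runtime.GOOS == "linux"
	ShouldUseGPUVariantLock sync.Mutex
)
```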
func (g *GPUInfo) HasSupportedAMDGPU() (bool, error) {
	// AMD GPU detection is only supported on Linux
	return false, nil
}
\ No newline at end of file
Suggested change: add a trailing newline at the end of the file.
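Since the review flags redeclarations across build tags as build-breaking, the non-Linux stub above presumably needs a !linux constraint so it never coexists with the Linux implementation. A hedged sketch of such a stub file follows; the GPUInfo type is assumed to be declared in a shared, untagged file and is repeated here only so the snippet stands alone.

```go
//go:build !linux

// Stub for platforms where AMD GPU detection is not implemented.
package gpuinfo

// GPUInfo is assumed to be declared elsewhere in the real package; it is
// repeated here only to make the sketch self-contained.
type GPUInfo struct{}

// HasSupportedAMDGPU always reports false off Linux.
func (g *GPUInfo) HasSupportedAMDGPU() (bool, error) {
	return false, nil
}
```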
@@ -0,0 +1,3 @@
//go:build linux

package gpuinfo
\ No newline at end of file
Suggested change: add a trailing newline after package gpuinfo.
	}

	return false, nil // No supported AMD GPU found
}
\ No newline at end of file
Suggested change: add a trailing newline at the end of the file.
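For reviewers who want to see the whole shape of the Linux-side probe, here is a hedged sketch assuming the approach discussed in this thread: treat the presence of /sys/class/kfd as a signal that the amdgpu/KFD driver is loaded, then confirm an AMD device via the DRM vendor ID (0x1002, AMD's PCI vendor ID). The exact heuristic in the PR may differ, and the GPUInfo type is repeated only to keep the sketch self-contained.

```go
//go:build linux

package gpuinfo

import (
	"os"
	"path/filepath"
	"strings"
)

// GPUInfo is assumed to be declared elsewhere in the real package.
type GPUInfo struct{}

// HasSupportedAMDGPU reports whether the KFD interface is present and at
// least one DRM device has AMD's PCI vendor ID.
func (g *GPUInfo) HasSupportedAMDGPU() (bool, error) {
	// /sys/class/kfd only exists when the amdgpu/KFD driver is loaded.
	if _, err := os.Stat("/sys/class/kfd"); err != nil {
		if os.IsNotExist(err) {
			return false, nil
		}
		return false, err
	}
	vendorFiles, err := filepath.Glob("/sys/class/drm/card*/device/vendor")
	if err != nil {
		return false, err
	}
	for _, f := range vendorFiles {
		data, err := os.ReadFile(f)
		if err != nil {
			continue
		}
		if strings.TrimSpace(string(data)) == "0x1002" { // AMD PCI vendor ID
			return true, nil
		}
	}
	return false, nil // No supported AMD GPU found
}
```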
) error {
	l.status = fmt.Sprintf("running llama.cpp version: %s",
		getLlamaCppVersion(log, filepath.Join(vendoredServerStoragePath, "com.docker.llama-server")))
	return errLlamaCppUpdateDisabled
I think it'd be better to stick to a consistent logic for picking whether we use acceleration or not. Currently on Linux we expect people to run the model-runner container (appropriate to their use case and system), so the accelerate/not-accelerate question should be decided in the CLI rather than here.
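To make the "decide in the CLI" idea concrete, a resolution step in the CLI could combine an explicit --gpu flag with autodetection so that --gpu none always wins. This is a hypothetical sketch: only the --gpu none value appears earlier in the thread; the other flag values, names, and backend strings are invented for illustration.

```go
package main

import (
	"fmt"
	"runtime"
)

// resolveBackend maps the user's --gpu choice to a runner variant.
// Autodetection only runs when the user did not choose explicitly, so an
// explicit --gpu none is never overridden.
func resolveBackend(gpuFlag string, hasAMDGPU func() bool) string {
	switch gpuFlag {
	case "none":
		return "cpu"
	case "rocm":
		return "rocm"
	case "", "auto":
		if runtime.GOOS == "linux" && hasAMDGPU() {
			return "rocm"
		}
		return "cpu"
	default:
		return "cpu"
	}
}

func main() {
	probe := func() bool { return true }       // stubbed AMD probe
	fmt.Println(resolveBackend("auto", probe)) // "rocm" on Linux with an AMD GPU
	fmt.Println(resolveBackend("none", probe)) // always "cpu"
}
```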
Would you like me to add logic to auto-detect …
Sure, see what we did for Nvidia. But only if it's not crazy or unreliable. Ideally we should be able to probe /proc, /sys or /dev to figure this out (I know that is not always possible though).
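The "probe /proc, /sys or /dev" idea in the NVIDIA case usually boils down to checking for driver artifacts. The sketch below is an assumption about what such a check looks like in general, not the project's actual NVIDIA code, which is not shown in this thread.

```go
package gpuinfo

import "os"

// hasNVIDIADevice reports whether common NVIDIA driver entry points exist.
func hasNVIDIADevice() bool {
	for _, p := range []string{
		"/proc/driver/nvidia/version", // present when the proprietary driver is loaded
		"/dev/nvidiactl",              // control device node created by the driver
	} {
		if _, err := os.Stat(p); err == nil {
			return true
		}
	}
	return false
}
```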
@ericcurtin Please see: #342. I will rebase on main once your PR is merged.