
Conversation

@ericcurtin
Contributor

On Linux at least

@gemini-code-assist
Contributor

Summary of Changes

Hello @ericcurtin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces an automatic detection mechanism for supported AMD GPUs on Linux. Upon detecting a compatible AMD GPU, the system will automatically configure the llama.cpp backend to utilize the ROCm variant, enhancing performance for users with AMD hardware without requiring manual intervention. This change streamlines the setup process and ensures optimal GPU utilization where available.

Highlights

  • Automatic ROCm Detection: Implemented logic to automatically detect supported AMD GPUs on Linux systems by inspecting KFD topology and gfx_target_version identifiers.
  • Dynamic Backend Selection: The llama.cpp backend will now dynamically select and download the appropriate ROCm variant if a compatible AMD GPU is detected on Linux, streamlining setup for users.
  • Platform-Specific Implementation: The AMD GPU detection logic is specifically implemented for Linux, with placeholder functions for other operating systems to ensure cross-platform compatibility without unnecessary checks.
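The detection described above reads `gfx_target_version` from the KFD topology files under `/sys/class/kfd/kfd/topology/nodes/<N>/properties`. A minimal, self-contained sketch of that parsing in Go follows; the helper names are hypothetical (not from this PR), and it assumes the common encoding `major*10000 + minor*100 + stepping` (stepping in hex), e.g. `100300` → `gfx1030`:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// decodeGfxTarget converts a KFD gfx_target_version integer into a gfx name,
// assuming the encoding major*10000 + minor*100 + stepping (stepping in hex,
// so 90010 decodes to "gfx90a").
func decodeGfxTarget(v int) string {
	major := v / 10000
	minor := (v / 100) % 100
	stepping := v % 100
	return fmt.Sprintf("gfx%d%d%s", major, minor,
		strconv.FormatInt(int64(stepping), 16))
}

// parseProperties extracts gfx_target_version from the contents of a
// /sys/class/kfd/kfd/topology/nodes/<N>/properties file, whose lines are
// whitespace-separated key/value pairs. A zero value means no GPU node.
func parseProperties(data string) (int, bool) {
	for _, line := range strings.Split(data, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "gfx_target_version" {
			if v, err := strconv.Atoi(fields[1]); err == nil && v > 0 {
				return v, true
			}
		}
	}
	return 0, false
}

func main() {
	fmt.Println(decodeGfxTarget(100300)) // gfx1030
	fmt.Println(decodeGfxTarget(90010))  // gfx90a
}
```

The real implementation would walk every node directory and compare the decoded target against the list of gfx targets the ROCm llama.cpp build supports.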


@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes - here's some feedback:

  • The environment-based GPU autodetection in init() can be surprising; consider moving that logic into an explicit configuration method or flag instead of a package init side-effect.
  • The GPU check in main.go duplicates the detection later in the backend; consider removing this duplicate scan to avoid redundant file system accesses.
  • Scanning /sys/class/kfd on every download invocation may impact performance; consider caching the detected GPU presence per process.
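The caching suggested in the last bullet can be done with `sync.Once`, so the `/sys/class/kfd` scan runs at most once per process. A sketch follows; `scanKFD` and `hasSupportedAMDGPUCached` are hypothetical names, and the scan body is stubbed out so the example is self-contained:

```go
package main

import (
	"fmt"
	"sync"
)

var scanCalls int

// scanKFD stands in for the real /sys/class/kfd topology walk; here it
// just records that the (expensive) probe actually ran.
var scanKFD = func() bool {
	scanCalls++
	return true
}

var (
	amdOnce     sync.Once
	amdDetected bool
)

// hasSupportedAMDGPUCached performs the detection at most once per process
// and reuses the result on every subsequent download invocation.
func hasSupportedAMDGPUCached() bool {
	amdOnce.Do(func() { amdDetected = scanKFD() })
	return amdDetected
}

func main() {
	hasSupportedAMDGPUCached()
	hasSupportedAMDGPUCached()
	fmt.Println(scanCalls) // 1: repeated calls reuse the cached result
}
```

`sync.Once` also makes the cached read safe under concurrent download invocations without an explicit mutex.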


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds automatic detection of AMD GPUs to use the ROCm variant on Linux. The changes are well-intentioned, but there are several critical issues that will prevent the code from building due to function and method redeclarations across different build tags. I've provided comments and suggestions to resolve these build-breaking issues. Additionally, the core GPU detection logic contains a critical bug that could lead to a panic, and there are several opportunities for improving code quality, readability, and performance, for which I've also left suggestions.

@ericcurtin ericcurtin force-pushed the auto-rocm branch 3 times, most recently from 3d3badd to b7e9f39 Compare October 28, 2025 00:15
On Linux at least

Signed-off-by: Eric Curtin <eric.curtin@docker.com>

@doringeman doringeman left a comment


This means `docker model install-runner --gpu none` won't have any effect anymore, right? Because we'll automatically update to the ROCm variant if it's supported.

ShouldUseGPUVariantLock.Lock()
defer ShouldUseGPUVariantLock.Unlock()
ShouldUseGPUVariant = true
}
Contributor


I'd rather have ShouldUseGPUVariant = runtime.GOOS == "linux" in download.go instead of an init.
For more context, this variable is used by Docker Desktop to determine whether it should display the GPU backed engine checkbox in the UI.

func (g *GPUInfo) HasSupportedAMDGPU() (bool, error) {
// AMD GPU detection is only supported on Linux
return false, nil
}
\ No newline at end of file
Contributor


Suggested change: add a trailing newline at the end of the file.

@@ -0,0 +1,3 @@
//go:build linux

package gpuinfo
\ No newline at end of file
Contributor


Suggested change: add a trailing newline at the end of the file.

}

return false, nil // No supported AMD GPU found
}
\ No newline at end of file
Contributor


Suggested change: add a trailing newline at the end of the file.

) error {
l.status = fmt.Sprintf("running llama.cpp version: %s",
getLlamaCppVersion(log, filepath.Join(vendoredServerStoragePath, "com.docker.llama-server")))
return errLlamaCppUpdateDisabled
Member


I think it'd be better to stick to a consistent logic for picking whether we use acceleration or not. Currently on Linux we expect people to run the model-runner container (appropriate to their use case and system), so the accelerate/not-accelerate question should be decided in the CLI rather than here.

@yeahdongcn
Copy link
Contributor

Would you like me to add logic to auto-detect musa as well?

@ericcurtin
Contributor Author

Would you like me to add logic to auto-detect musa as well?

Sure, see what we did for Nvidia. But only if it's not crazy or unreliable. Ideally we should be able to probe /proc, /sys or /dev to figure this out (I know that is not always possible though)

@yeahdongcn
Contributor

@ericcurtin Please see: #342

I will rebase main once your PR is merged.
