implement resource embedding #657
base: main
Conversation
@soyuka This is an awesome approach! I would love to have a pre-installed tool for getting resources. I have a couple of thoughts on your approach:
This also makes me wonder whether we should add other default tools, like web search, to simulate real LLM environments such as Claude Desktop.
```
    </TooltipProvider>
  );
}
```
You don't usually add empty lines, right? My IDE does that; I should probably roll it back.
```
  }
  return allResourceTemplates;
}
```
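The fragment above ends by returning an accumulated `allResourceTemplates` array. A minimal sketch of how such an aggregation might look with the cursor-based pagination the MCP spec uses for resource listings; the `list` callback and its page shape are assumptions standing in for the real client call:

```typescript
// Hypothetical page shape mirroring cursor-paginated MCP list results.
interface ResourceTemplate {
  uriTemplate: string;
  name: string;
}

interface TemplatePage {
  resourceTemplates: ResourceTemplate[];
  nextCursor?: string;
}

// Follow nextCursor until the server stops returning one,
// collecting every template into a single array.
async function getAllResourceTemplates(
  list: (cursor?: string) => Promise<TemplatePage>,
): Promise<ResourceTemplate[]> {
  const allResourceTemplates: ResourceTemplate[] = [];
  let cursor: string | undefined;
  do {
    const page = await list(cursor);
    allResourceTemplates.push(...page.resourceTemplates);
    cursor = page.nextCursor;
  } while (cursor !== undefined);
  return allResourceTemplates;
}
```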
This is merely a prototype for now; I still want to improve the usage, but at least it allows me to test resources.
Agreed that this adds quite some complexity, but I felt it was the correct way of handling resources. What about having both implementations: if the configured model doesn't support tool calling, I can fall back to injecting the resources into the system prompt? I'm also wondering how I could cache the resource calls, since for now the tool is called even though the client already has the whole resource. Another thing that bothers me: the modelcontextprotocol spec hasn't quite figured out the best way of paginating resources (modelcontextprotocol/modelcontextprotocol#799). I may have some ideas, but for now I'll leave that aside.
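The caching concern raised above could be addressed with a URI-keyed cache in front of the resource read, so repeated tool calls for the same resource reuse contents the client already fetched. A hedged sketch, not the PR's implementation; the `read` callback signature is an assumption:

```typescript
// Hypothetical cache keyed by resource URI. The wrapped `read` callback
// stands in for the actual client-side resource fetch.
class ResourceCache {
  private cache = new Map<string, string>();

  constructor(private read: (uri: string) => Promise<string>) {}

  // Return cached contents if present, otherwise fetch and remember them.
  async get(uri: string): Promise<string> {
    const hit = this.cache.get(uri);
    if (hit !== undefined) return hit;
    const contents = await this.read(uri);
    this.cache.set(uri, contents);
    return contents;
  }
}
```

A real version would also need invalidation, e.g. on `notifications/resources/updated`, which this sketch leaves out.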
Suggestion for #652.
Basically, this creates a tool for the LLM to retrieve resources:
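A minimal sketch of what such a tool could look like; the tool name `get_resource`, the `readResource` callback, and the result shape are assumptions for illustration, not the PR's actual definition:

```typescript
// Hypothetical MCP-style tool result: a list of text content parts.
interface ToolResult {
  content: { type: "text"; text: string }[];
}

// Build a tool the LLM can call with a resource URI; `readResource`
// stands in for the client's actual resource fetch.
function makeGetResourceTool(readResource: (uri: string) => Promise<string>) {
  return {
    name: "get_resource",
    description: "Retrieve the contents of an MCP resource by its URI",
    inputSchema: {
      type: "object",
      properties: { uri: { type: "string" } },
      required: ["uri"],
    },
    async execute(args: { uri: string }): Promise<ToolResult> {
      const text = await readResource(args.uri);
      return { content: [{ type: "text", text }] };
    },
  };
}
```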
Also interesting: https://github.com/openai/codex/pull/5239/files