
Conversation


@soyuka soyuka commented Oct 16, 2025

suggestion for #652

Basically this creates a tool for the LLM to retrieve resources:

[Screenshot: 20251016_11h49m19s_grim]

Also interesting: https://github.com/openai/codex/pull/5239/files

@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. enhancement New feature or request labels Oct 16, 2025
@matteo8p (Collaborator)

@soyuka This is an awesome approach! I would love to have a pre-installed tool to get resources. I have a couple of thoughts on your approach:

  • I like how you made additions to MCPClientManager; it's the foundation of MCPJam, and I like the new helpers you added. I do think we can change the function names, though. We already have listResource for a single server. Maybe we can change that function to accept multiple serverIds, or rename your function to listResourcesForMultipleServers to be more explicit.
  • I don't think we need a resources embedding model. That model requires the user to provide an OpenAI, DeepSeek, or Gemini API key, otherwise this feature wouldn't work. Instead of an embedding model, I think we can just pass the URIs of the resources in the system prompt. With the URIs in its system prompt, the model should be able to call getResource with the right URI.
  • In addition, we provide free models via cloud too, so we'll have to make some modifications there as well, but we'll take care of that.
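The rename suggested above could look something like the sketch below. Everything here is illustrative: the MCPClientManager shape, the Resource type, and the listResources signature are assumptions, not the actual MCPJam API.

```typescript
// Hypothetical sketch — MCPClientManager, Resource, and listResources are
// stand-ins for the real MCPJam types, not the actual implementation.
interface Resource {
  uri: string;
  name?: string;
}

class MCPClientManager {
  constructor(private resourcesByServer: Map<string, Resource[]>) {}

  // Existing single-server helper (assumed shape).
  async listResources(serverId: string): Promise<Resource[]> {
    return this.resourcesByServer.get(serverId) ?? [];
  }

  // Proposed helper: fan out over several servers and tag each resource
  // with the server it came from, so callers can disambiguate URIs.
  async listResourcesForMultipleServers(
    serverIds: string[],
  ): Promise<Array<Resource & { serverId: string }>> {
    const perServer = await Promise.all(
      serverIds.map(async (serverId) => {
        const resources = await this.listResources(serverId);
        return resources.map((r) => ({ ...r, serverId }));
      }),
    );
    return perServer.flat();
  }
}
```

Tagging each result with its serverId keeps the aggregated list unambiguous when two servers expose the same URI.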

This also makes me wonder whether we should add other default tools, like web search, to simulate real LLM environments such as Claude Desktop.

</TooltipProvider>
);
}

Author (soyuka):

you don't usually add empty lines, right? My IDE does that; I should probably roll it back.

}
return allResourceTemplates;
}

Author (soyuka):

this is merely a prototype for now. I still want to improve the usage, but at least it allows me to test resources.


soyuka commented Oct 17, 2025

> I don't think we need a resources embedding model. That model requires the user to provide an OpenAI, DeepSeek, or Gemini API key, otherwise this feature wouldn't work. Instead of an embedding model, I think we can just pass the URIs of the resources in the system prompt. With the URIs in its system prompt, the model should be able to call getResource with the right URI.

Agreed that this adds quite some complexity, but I felt it was the ideal way of handling resources. What about having both implementations: if the configured model doesn't support embeddings, I can fall back to just putting the resources in the system prompt?
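The system-prompt fallback could be as simple as the sketch below. The function name, Resource shape, and prompt wording are all assumptions for illustration, not MCPJam code.

```typescript
// Illustrative fallback: when no embedding model is available, inline the
// resource URIs into the system prompt so the model can call getResource
// directly. Names here are hypothetical, not the actual MCPJam API.
interface Resource {
  uri: string;
  name?: string;
}

function buildResourceSystemPrompt(resources: Resource[]): string {
  if (resources.length === 0) return "";
  const lines = resources.map(
    (r) => `- ${r.uri}${r.name ? ` (${r.name})` : ""}`,
  );
  return [
    "The following MCP resources are available.",
    "To read one, call the getResource tool with its exact URI:",
    ...lines,
  ].join("\n");
}
```

This keeps the embedding path optional: the same getResource tool serves both, only the retrieval hint changes.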

I'm also wondering how I could cache the resource calls, since for now it calls the tool even though the client already has the whole resource. Another thing that bothers me: the modelcontextprotocol spec hasn't quite figured out the best way of paginating resources (modelcontextprotocol/modelcontextprotocol#799). I may have some ideas, but for now I'll leave that aside.

