Description
While working with the Microsoft Learn MCP server, I encountered several issues related to its response format, parameter configuration, installation methods, and contextual relevance in outputs. This issue outlines the problems and suggests improvements.
🐞 Bug Reports
1. Inconsistent Response Format
Expected:
```
result.content = [
  TextContent,
  TextContent,
  TextContent
]
```

Actual:

```
result.content = [
  TextContent,
  Text = [
    TextContent,
    TextContent,
    TextContent
  ]
]
```

The nested `Text = [...]` structure seems unexpected and breaks consistency with how content is processed downstream in tools.
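Until the shape is fixed, clients can flatten the result defensively. A minimal sketch, assuming the nested payload surfaces as a list under the item's `text` attribute (the `TextContent` stand-in below only models the shapes shown above, not the SDK's real types):

```python
from dataclasses import dataclass
from typing import Any, List

# Simplified stand-in for the MCP content type, matching the shapes
# described above (real code would use mcp.types.TextContent).
@dataclass
class TextContent:
    text: Any  # normally a str; in the buggy case, a nested list

def flatten_content(items: List[Any]) -> List[TextContent]:
    """Flatten result.content, descending into any nested Text = [...] lists."""
    flat: List[TextContent] = []
    for item in items:
        if isinstance(getattr(item, "text", None), list):
            flat.extend(flatten_content(item.text))  # the unexpected nested case
        else:
            flat.append(item)
    return flat

# Example: the "Actual" shape above collapses to four flat TextContent items.
nested = TextContent(text=[TextContent("a"), TextContent("b"), TextContent("c")])
result_content = [TextContent("intro"), nested]
assert len(flatten_content(result_content)) == 4
```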
2. Configuration Parameter Limitation
Currently, the server appears to accept only the `query` parameter. However, other tools and agents often use `question`, which is not recognized in the current implementation.
🧪 Reproduction Example:
See this sample I created that utilizes `question`:
🔗 Scenario Sample (Python)
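On the server side, accepting both names could be as small as an alias on the tool signature. A sketch using the MCP Python SDK's `FastMCP` (the tool name and `run_search` helper are placeholders, not the server's actual code):

```python
from typing import Optional
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("learn-docs")  # hypothetical server name

def run_search(text: str) -> str:
    # Placeholder for the real Microsoft Learn retrieval backend.
    return f"results for: {text}"

@mcp.tool()
def microsoft_docs_search(query: Optional[str] = None,
                          question: Optional[str] = None) -> str:
    """Search Microsoft Learn. Accepts either `query` or `question`."""
    text = query or question
    if not text:
        raise ValueError("Provide either `query` or `question`.")
    return run_search(text)
```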
💡 Feature Requests
3. Installation Support with uvx/npx or MCP-native CLI
There's currently no guidance or working setup to:
- Install the MCP server via `uvx`, `npx`, or a package manager.
- Deploy and run it as a standalone MCP-compatible tool instead of just through API calls or SSE streams.
A CLI-based or containerized deployment option would make integration far easier.
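For context on feasibility: with the official `mcp` Python SDK, a stdio-based standalone entry point is already small. A sketch of what a `uvx`-launchable console script might look like (server name, tool, and packaging are all hypothetical):

```python
# Hypothetical standalone entry point; if packaged as a console script
# (e.g. `ms-learn-mcp`), `uvx ms-learn-mcp` could spawn it directly.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ms-learn-docs")  # placeholder server name

@mcp.tool()
def microsoft_docs_search(query: str) -> str:
    """Search Microsoft Learn documentation (stubbed for illustration)."""
    return f"results for: {query}"

def main() -> None:
    mcp.run()  # stdio transport by default, so MCP clients can spawn it

if __name__ == "__main__":
    main()
```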
4. Improve Content Relevancy
Referencing Issue #7, there’s a need to improve retrieval accuracy and context-aware ranking. A more agentic RAG (retrieval-augmented generation) approach would be beneficial here.
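To make the request concrete, here is a rough sketch of the retrieve-assess-reformulate loop that "agentic RAG" usually implies (all three helpers are hypothetical placeholders for LLM-backed steps, trivially stubbed here):

```python
from typing import List

def retrieve(query: str) -> List[str]:
    return [f"doc about {query}"]  # stand-in for the real retrieval call

def relevance_score(question: str, docs: List[str]) -> float:
    # A real implementation would have an LLM judge the results.
    return 1.0 if any(question in d for d in docs) else 0.0

def rewrite_query(question: str, docs: List[str]) -> str:
    return question.lower()  # a real agent would reformulate with an LLM

def agentic_search(question: str, max_rounds: int = 3) -> List[str]:
    """Retrieve, self-assess relevance, and reformulate before giving up."""
    query, docs = question, []
    for _ in range(max_rounds):
        docs = retrieve(query)
        if relevance_score(question, docs) >= 0.7:  # "good enough" cutoff
            break
        query = rewrite_query(question, docs)
    return docs

print(agentic_search("azure functions cold start"))
```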
5. Agent-Aware Context Fetching
As seen in PR #23, using the MCP tool currently requires the agent to be manually instructed to fetch documents. Ideally, agents should autonomously determine when and what to fetch based on conversation flow.
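One lever that needs no protocol changes: agents largely rely on a tool's description to decide when to invoke it on their own, so the server could spell out trigger conditions there. A sketch (the description text below is illustrative, not the server's actual metadata):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ms-learn-docs")  # placeholder server name

@mcp.tool(description=(
    "Search official Microsoft/Azure documentation. Call this without being "
    "asked whenever the conversation involves Microsoft products, Azure "
    "services, .NET APIs, or error messages from Microsoft tooling."
))
def microsoft_docs_search(query: str) -> str:
    return f"results for: {query}"  # stubbed retrieval
```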
6. Expand Source Search to TechCommunity and DevBlogs
Request:
Can you integrate additional content sources like:
- Microsoft Tech Community
- Microsoft DevBlogs

This would enable richer and more diversified documentation responses.