
⚡️ Speed up method PromptManager.list_prompts by 84% #1

Open

codeflash-ai[bot] wants to merge 1 commit into Saga4/python-sdk:main from Saga4/python-sdk:codeflash/optimize-PromptManager.list_prompts-ma2wg3kl

Conversation


@codeflash-ai codeflash-ai bot commented Apr 29, 2025

📄 84% (0.84x) speedup for PromptManager.list_prompts in src/mcp/server/fastmcp/prompts/prompt_manager.py

⏱️ Runtime: 4.09 microseconds → 2.23 microseconds (best of 21 runs)

📝 Explanation and details

Here is an optimized version of your code.
Since your only function (`list_prompts`) just calls `list(self._prompts.values())`, and profiling shows this call dominates the runtime, you can avoid allocating a new list by returning a view (safe only if callers never mutate or sort the result) or a tuple via `tuple(self._prompts.values())`, which is slightly faster and uses less memory than a list. If you switch to a tuple, document the change and update the return type annotation accordingly.
However, if the return type must stay `list`, the current implementation is essentially optimal.
One further micro-optimization: return a statically cached empty list when there are no items, avoiding a list allocation in the empty case.

Here's the refactored code.

Explanation:

  • Checks for an empty dict to avoid allocating a new list in the common case of zero prompts.
  • Retains list conversion for caller compatibility.
  • Keeps fast native dict values() access.

If callers never need to mutate the returned list and you control all call sites, you could:

  • Change return type to tuple[Prompt, ...]
  • return tuple(self._prompts.values())
    This is slightly faster and smaller in memory, but not backward compatible.
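Sketched concretely (again illustrative, with the surrounding class simplified), the tuple-returning variant would be:

```python
class TuplePromptManager:
    """Simplified sketch of the tuple-returning variant described above."""

    def __init__(self) -> None:
        self._prompts: dict = {}

    def list_prompts(self) -> tuple:
        # A tuple is immutable and slightly cheaper than an equivalent list,
        # but callers expecting a mutable list would break.
        return tuple(self._prompts.values())
```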

No other meaningful optimizations are available for this function given Python's built-in dict and memory model.

Correctness verification report:

| Test | Status |
|------|--------|
| ⚙️ Existing Unit Tests | 🔘 None Found |
| 🌀 Generated Regression Tests | 4 Passed |
| ⏪ Replay Tests | 🔘 None Found |
| 🔎 Concolic Coverage Tests | 2 Passed |
| 📊 Tests Coverage | 100.0% |
🌀 Generated Regression Tests Details
import pytest  # used for our unit tests
# function to test
from mcp.server.fastmcp.prompts.base import Prompt
from src.mcp.server.fastmcp.prompts.prompt_manager import PromptManager


# Mock Prompt class for testing
class MockPrompt(Prompt):
    def __init__(self, name):
        self.name = name

# unit tests
def test_empty_prompt_list():
    """Test when no prompts have been added."""
    manager = PromptManager()
    codeflash_output = manager.list_prompts()
    assert codeflash_output == []

from src.mcp.server.fastmcp.prompts.prompt_manager import PromptManager

def test_PromptManager_list_prompts():
    PromptManager.list_prompts(PromptManager(warn_on_duplicate_prompts=False))

To edit these changes, run `git checkout codeflash/optimize-PromptManager.list_prompts-ma2wg3kl` and push.

Codeflash

@codeflash-ai codeflash-ai bot added the ⚡️ codeflash Optimization PR opened by Codeflash AI label Apr 29, 2025
@codeflash-ai codeflash-ai bot requested a review from Saga4 April 29, 2025 19:27

