[None][test] add deepseek RCCA perf test case #11736

Merged
ruodil merged 1 commit into NVIDIA/TensorRT-LLM:main from ruodil:ruodil/disagg
Mar 3, 2026
Conversation

ruodil (Collaborator) commented Feb 26, 2026

Summary by CodeRabbit

Release Notes

  • Tests
    • Added a performance benchmarking configuration for the Deepseek R1 NVFP4 model variant with chunked prefill and FP8 KV cache optimizations, supporting batch sizes up to 32 and sequence lengths up to 81,920 tokens.
    • Added corresponding test cases to the quality assurance test suite.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com>
@ruodil ruodil self-assigned this Feb 26, 2026
@ruodil ruodil requested review from a team as code owners February 26, 2026 05:57
coderabbitai Bot commented Feb 26, 2026

📝 Walkthrough

The changes add a new performance test configuration for a Deepseek R1 NVFP4 model with chunked prefill and FP8 KV cache, including model configuration parameters and corresponding benchmark test entry.

Changes

  • NVFP4 Model Configuration — tests/integration/defs/perf/pytorch_model_config.py
    Adds a pattern_config for the Deepseek R1 NVFP4 model with chunked prefill enabled, FP8 KV cache, max_num_tokens 4096, max_batch_size 32, max_seq_len 81920, and cuda_graph_config settings.
  • Performance Benchmark Test — tests/integration/test_lists/qa/llm_perf_core.yml
    Adds a test entry for the nvfp4 configuration with chunked prefill using deepseek_r1_nvfp4-bench-pytorch-float4-maxbs, with maxbs 32, maxnt 4096, kv_frac 0.80, and 3000 requests across 2 epochs.
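The parameters listed above can be pictured with a minimal sketch. The dict name, key names, and nesting below are illustrative assumptions and do not come from the actual pytorch_model_config.py:

```python
# Hypothetical sketch of the new Deepseek R1 NVFP4 pattern_config summarized
# above. Key names and structure are assumptions for illustration; the real
# entry lives in tests/integration/defs/perf/pytorch_model_config.py.
deepseek_r1_nvfp4_chunked_prefill = {
    "enable_chunked_prefill": True,               # chunked prefill enabled
    "kv_cache_dtype": "fp8",                      # FP8 KV cache
    "max_num_tokens": 4096,
    "max_batch_size": 32,
    "max_seq_len": 81920,
    "cuda_graph_config": {"max_batch_size": 32},  # cuda_graph_config settings
}

# Sanity-check the headline limits quoted in the summary.
assert deepseek_r1_nvfp4_chunked_prefill["max_batch_size"] == 32
assert deepseek_r1_nvfp4_chunked_prefill["max_seq_len"] == 81920
```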

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Title check — ⚠️ Warning. The PR title mentions 'deepseek RCCA perf test case', but the actual changes add a Deepseek R1 NVFP4 model config with chunked prefill, not a generic RCCA test case, so the title is misleading about the specific content. Resolution: revise the title to specifically mention NVFP4 and chunked prefill, e.g., '[None][test] add Deepseek R1 NVFP4 chunked prefill perf test case'.
  • Description check — ⚠️ Warning. The PR description is completely empty except for the template placeholder comments; all required sections (Description, Test Coverage, PR Checklist items) are unfilled. Resolution: fill in the Description and Test Coverage sections explaining the NVFP4 config addition, and complete the PR Checklist by addressing each item listed.

✅ Passed checks (1 passed)

  • Docstring Coverage — ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.


Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai Bot left a comment


🧹 Nitpick comments (1)
tests/integration/test_lists/qa/llm_perf_core.yml (1)

259-260: Consider adding a TIMEOUT annotation.

The test has reqs:3000, which is substantial. Similar DeepSeek tests with comparable request counts (e.g., lines 257-258, also reqs:3000) include TIMEOUT(120). While the input_output_len:8192,512 profile (long input, short output) differs from 1000,2000, you may want to verify whether a timeout is needed to prevent test hangs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/integration/test_lists/qa/llm_perf_core.yml` around lines 259 - 260,
Add a TIMEOUT annotation to the long-running DeepSeek test entry to prevent
hangs: for the test identifier string
perf/test_perf.py::test_perf[deepseek_r1_nvfp4-bench-pytorch-float4-maxbs:32-maxnt:4096-kv_frac:0.80-input_output_len:8192,512-reqs:3000-ep:2-tp:4-gpus:4]
insert the same TIMEOUT(120) annotation used by nearby DeepSeek tests (e.g., the
entries around lines 257–258) so the test will fail fast if it exceeds the
expected runtime.
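Applied to the entry named in the prompt, the suggested fix would look roughly like this. The exact line position and annotation spelling are assumptions mirroring the reviewer's description of the nearby entries:

```yaml
# Sketch of the llm_perf_core.yml entry with the TIMEOUT annotation the
# reviewer suggests, mirroring the nearby DeepSeek tests around lines 257-258.
- perf/test_perf.py::test_perf[deepseek_r1_nvfp4-bench-pytorch-float4-maxbs:32-maxnt:4096-kv_frac:0.80-input_output_len:8192,512-reqs:3000-ep:2-tp:4-gpus:4] TIMEOUT (120)
```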

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1754dcc and fb6d76a.

📒 Files selected for processing (2)
  • tests/integration/defs/perf/pytorch_model_config.py
  • tests/integration/test_lists/qa/llm_perf_core.yml

ruodil changed the title from "[RCCA][test] add deepseek RCCA perf test case" to "[None][test] add deepseek RCCA perf test case" on Feb 26, 2026
ruodil (Author) commented Feb 26, 2026

/bot skip --comment "skip test as just adding test cases"

@tensorrt-cicd
PR_Github #36884 [ skip ] triggered by Bot. Commit: fb6d76a

@tensorrt-cicd
PR_Github #36884 [ skip ] completed with state SUCCESS. Commit: fb6d76a
Skipping testing for commit fb6d76a


@ruodil ruodil merged commit 8553560 into NVIDIA:main Mar 3, 2026
7 of 11 checks passed
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Mar 9, 2026
Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com>
Co-authored-by: Ruodi Lu <ruodil@users.noreply.github.com>
tianyuz-nv pushed a commit to wanqian-nv/TensorRT-LLM that referenced this pull request Mar 19, 2026
Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com>
Co-authored-by: Ruodi Lu <ruodil@users.noreply.github.com>

Labels

None yet

Projects

None yet


3 participants
