[https://nvbugs/6084445][fix] use DEEPGEMM for DeepSeek-V3-Lite fp8 chunked prefill on SM100/SM103 #13257
kaiyux merged 1 commit into NVIDIA/TensorRT-LLM:main from jmydurant/TensorRT-LLM:fix/nvbug_6084445
Conversation
/bot help
GitHub Bot Help
Provide a user friendly way for developers to interact with a Jenkins server. See details below for each supported subcommand.

run
Launch build/test pipelines. All previously running jobs will be killed.
kill
Kill all running builds associated with pull request.
skip
Skip testing for latest commit on pull request.
reuse-pipeline
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
📝 Walkthrough
The changes update PyTorch test configurations to conditionally enable MoE backend settings for fp8 quantization on SM100/SM103 GPUs, register the updated test in the test database, and remove a corresponding test skip waiver to enable execution.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
Pre-merge checks: 4 passed, 1 warning
/bot run --disable-fail-fast

PR_Github #44615 [ run ] triggered by Bot.
ab5dc34 to 56bbff2
PR_Github #44615 [ run ] completed with state
/bot run --disable-fail-fast

PR_Github #44741 [ run ] triggered by Bot.
PR_Github #44741 [ run ] completed with state
Signed-off-by: Mingyang Jiang <13463932+jmydurant@users.noreply.github.com>
56bbff2 to e9dead4
/bot run --disable-fail-fast

PR_Github #44888 [ run ] triggered by Bot.
PR_Github #44888 [ run ] completed with state
/bot run --disable-fail-fast

PR_Github #45097 [ run ] triggered by Bot.
PR_Github #45097 [ run ] completed with state
Description
This change fixes the DeepSeek-V3-Lite fp8 chunked prefill path on Blackwell SM100/SM103 GPUs.
For TestDeepSeekV3Lite::test_chunked_prefill, the fp8 checkpoint was previously using the default MoE backend selection path. On B200/B300, that could fall back to a CUTLASS fp8 block-scale MoE path that eventually hits a Hopper-only deep_gemm kernel path and fails at runtime.
This PR explicitly selects DEEPGEMM for the fp8 chunked prefill case on SM100/SM103, which matches the expected backend support for DeepSeek-V3-Lite FP8 block-scale MoE on Blackwell.

In addition, this PR adds a B300 single-GPU coverage entry for:

accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_chunked_prefill[quant_dtype=fp8-kv_cache_reuse=True-fp8kv=True-overlap_scheduler=True]

Since the issue is now fixed, the corresponding waiver is also removed.
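The selection logic described above can be sketched roughly as follows. This is a minimal illustrative stand-in, not the actual TensorRT-LLM code: the function name select_moe_backend and the string backend values are hypothetical, and the real test presumably sets the backend through the PyTorch LLM configuration rather than a free function.

```python
# Hedged sketch of the backend pinning this PR describes. All names here
# (select_moe_backend, the returned backend strings) are illustrative
# assumptions, not the TensorRT-LLM API.

def select_moe_backend(sm_version: int, quant_dtype: str) -> str:
    """Pick a MoE backend for DeepSeek-V3-Lite block-scale MoE."""
    if quant_dtype == "fp8" and sm_version in (100, 103):
        # The default CUTLASS fp8 block-scale selection could reach a
        # Hopper-only deep_gemm kernel on B200/B300, so pin DEEPGEMM.
        return "DEEPGEMM"
    # Elsewhere, keep the default selection path.
    return "CUTLASS"

print(select_moe_backend(100, "fp8"))  # DEEPGEMM on SM100/SM103
print(select_moe_backend(90, "fp8"))   # default path on Hopper
```

The point of the fix is only the conditional: fp8 plus SM100/SM103 is forced onto DEEPGEMM instead of being left to default backend selection.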
Test Coverage
Verified on B300 single-GPU:
accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_chunked_prefill[quant_dtype=fp8-kv_cache_reuse=True-fp8kv=True-overlap_scheduler=True]

PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment /bot help.