[TRTLLM-10962][feat] Refactor video encoding to use ffmpeg CLI or pur… #11672

JunyiXu-nv merged 4 commits into NVIDIA/TensorRT-LLM:main from JunyiXu-nv:dev-junyi-add-video-encoding-methods
Conversation
…e Python MJPEG/AVI encoder

- Simplify media_storage.py to keep only FfmpegCliEncoder and add PurePythonEncoder
- PurePythonEncoder outputs MJPEG codec in AVI container (no external dependencies)
- Fix save_video() to return actual output path from format-specific save methods
- Update openai_server.py to handle both .mp4 and .avi output formats
- Add output_path field to VideoJob model for tracking actual file path
- Update sync/async video gen examples to detect Content-Type and adjust filename
- Update README to remove PyAV dependency and document video encoding options

Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
/bot run
📝 Walkthrough

This PR introduces a pluggable video encoding system with two implementations: FFmpeg-based H.264 MP4 encoding and a pure Python MJPEG-in-AVI fallback. Runtime format detection and path adjustment are implemented across client and server components to handle dynamically selected encoders.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
tensorrt_llm/serve/openai_server.py (1)
1628-1633: ⚠️ Potential issue | 🔴 Critical

Return value of `create_error_response` is silently discarded in background task.

`_generate_video_background` runs as a fire-and-forget `asyncio.create_task`. When `output.video` is None, this code returns `create_error_response(...)` — but no caller awaits or uses this return value. The job status remains unchanged (stays "queued"), so the client will poll forever and never see a failure. This should instead update the job status to "failed", consistent with the `except` block below.

Proposed fix
```diff
         if output.video is None:
-            return self.create_error_response(
-                message="Video generation failed",
-                err_type="InternalServerError",
-                status_code=HTTPStatus.INTERNAL_SERVER_ERROR,
-            )
+            job = await VIDEO_STORE.get(video_id)
+            if job:
+                job.status = "failed"
+                job.completed_at = int(time.time())
+                job.error = "Video generation failed: output.video is None"
+                await VIDEO_STORE.upsert(video_id, job)
+            return
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tensorrt_llm/serve/openai_server.py` around lines 1628 - 1633, When output.video is None inside _generate_video_background the code currently returns create_error_response(...) which is discarded by the background task; instead update the job status to "failed" and persist that update using the same mechanism used in the except block below (so the client polling sees a terminal failure), log the error, and then return/exit the background task. Ensure you reference the same job record and status-update function used in the except handler rather than relying on the discarded create_error_response return value.

tensorrt_llm/serve/media_storage.py (2)
691-698: ⚠️ Potential issue | 🟠 Major

PNG fallback path is incorrect when encoder already changed extension to `.avi`.

If `PurePythonEncoder` was selected, line 683 changes `output_path` from `.mp4` to `.avi`. If the encoder then throws, the fallback at line 697 does `output_path.replace(".mp4", ".png")` — but the path is now `.avi`, so the replace is a no-op. The PNG image gets written to a file with an `.avi` extension.

Proposed fix — use `os.path.splitext` instead of string replace
```diff
     except Exception as e:
         logger.error(f"Error encoding video: {e}")
         import traceback
         logger.error(traceback.format_exc())
         logger.warning("Falling back to saving middle frame as PNG.")
-        png_path = output_path.replace(".mp4", ".png")
+        png_path = os.path.splitext(output_path)[0] + ".png"
         return MediaStorage._save_middle_frame(video, png_path)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tensorrt_llm/serve/media_storage.py` around lines 691 - 698, The fallback writes a PNG using output_path.replace(".mp4", ".png") which fails if earlier code (e.g., PurePythonEncoder) changed the extension to .avi; update the except block in MediaStorage (the function that encodes video and calls MediaStorage._save_middle_frame) to compute the PNG path by replacing the file extension using os.path.splitext(output_path)[0] + ".png" (or equivalent) rather than string replace, and ensure you import os if not already present so the PNG is written with the correct .png extension.
625-658: ⚠️ Potential issue | 🔴 Critical

`convert_video_to_bytes` reads from the original temp path, but `save_video` may write to a different path.

When `PurePythonEncoder` is selected, `save_video` changes the extension from `.mp4` to `.avi`, writing the file at a different path than `tmp_path`. The code then tries to read from the original `tmp_path` (which doesn't exist) and the cleanup also targets the wrong file.

Proposed fix
```diff
     try:
         # Save to temporary file
-        MediaStorage.save_video(video, tmp_path, audio, frame_rate, format)
+        actual_path = MediaStorage.save_video(video, tmp_path, audio, frame_rate, format)
         # Read bytes
-        with open(tmp_path, "rb") as f:
+        with open(actual_path, "rb") as f:
             video_bytes = f.read()
         return video_bytes
     finally:
         # Clean up temporary file
-        if os.path.exists(tmp_path):
-            os.unlink(tmp_path)
+        if os.path.exists(actual_path):
+            os.unlink(actual_path)
+        if actual_path != tmp_path and os.path.exists(tmp_path):
+            os.unlink(tmp_path)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tensorrt_llm/serve/media_storage.py` around lines 625 - 658, convert_video_to_bytes assumes save_video writes to tmp_path but save_video (when using PurePythonEncoder) can change the extension and write to a different file; update the data flow so convert_video_to_bytes reads and cleans up the actual file written: modify save_video (or its callers) to return the final output path (or accept and use a guaranteed final path), then in convert_video_to_bytes call MediaStorage.save_video(...) and capture the returned final_path (or compute the final path based on encoder), read bytes from that final_path, and delete that final_path in the finally block (instead of tmp_path); reference functions: convert_video_to_bytes, MediaStorage.save_video, and encoder PurePythonEncoder.
🧹 Nitpick comments (7)
tensorrt_llm/serve/media_storage.py (4)
451-456: New encoder instance created on every call.
`get_video_encoder()` creates a new `FfmpegCliEncoder()` or `PurePythonEncoder()` on each invocation. Since these are stateless, consider caching the instance or making them singletons to avoid unnecessary allocations on every video save.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tensorrt_llm/serve/media_storage.py` around lines 451 - 456, get_video_encoder currently instantiates a new FfmpegCliEncoder or PurePythonEncoder on every call; since these encoders are stateless, change get_video_encoder to return a cached singleton instance instead (e.g. store one module-level variable and initialize it on first call). Update get_video_encoder to check the cached variable and only create FfmpegCliEncoder or PurePythonEncoder once (referencing the existing function get_video_encoder and classes FfmpegCliEncoder / PurePythonEncoder) so subsequent calls reuse the same instance.
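The caching the comment suggests can be sketched roughly as follows. The stub classes and the module-level `_cached_encoder` variable are illustrative stand-ins for the real encoders in `media_storage.py`, not the PR's actual code:

```python
from typing import Optional


class FfmpegCliEncoder:
    """Stub standing in for the ffmpeg-backed H.264/MP4 encoder."""


class PurePythonEncoder:
    """Stub standing in for the MJPEG-in-AVI fallback encoder."""


def _check_ffmpeg_available() -> bool:
    # Stand-in for the real ffmpeg probe; assume unavailable here.
    return False


_cached_encoder: Optional[object] = None


def get_video_encoder():
    """Return a cached encoder instance; both encoders are stateless."""
    global _cached_encoder
    if _cached_encoder is None:
        if _check_ffmpeg_available():
            _cached_encoder = FfmpegCliEncoder()
        else:
            _cached_encoder = PurePythonEncoder()
    return _cached_encoder
```

Repeated calls then return the same object, so no allocation happens per video save.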
26-39: Minor: `subprocess.run` with partial executable path.

Static analysis flags `["ffmpeg", "-version"]` (S607) for using a partial path. This is standard practice for tool detection and poses no real security risk here since the command is hardcoded. You can suppress the warning with a `# noqa: S607` comment if desired.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tensorrt_llm/serve/media_storage.py` around lines 26 - 39, The static analysis warning for using a partial executable path in the subprocess.run call inside _check_ffmpeg_available is a false positive; suppress it by adding a "# noqa: S607" comment to the subprocess.run invocation (or its argument line) where ["ffmpeg", "-version"] is passed so the linter ignores this specific S607 warning while leaving the rest of the function unchanged.
441-456: `get_video_encoder` never returns `None`, but docstring and callers check for it.

The function always returns either `FfmpegCliEncoder()` or `PurePythonEncoder()`. The docstring says it may return `None`, and `_save_mp4` (line 680) has a dead `else` branch handling `None`. Consider updating the docstring and return type to `VideoEncoder` (non-optional), or keep the `None` path only if you anticipate a future scenario where both encoders might be unavailable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tensorrt_llm/serve/media_storage.py` around lines 441 - 456, The get_video_encoder function never returns None (it always returns FfmpegCliEncoder() or PurePythonEncoder()), so update its signature and docstring to return VideoEncoder (non-Optional) and remove the misleading "or None" text; then remove the dead None-handling branch in _save_mp4 (and any other callers that check for None) so callers assume a valid VideoEncoder is returned; keep the use of _check_ffmpeg_available(), FfmpegCliEncoder, and PurePythonEncoder as-is.
214-216: Chain the exception with `from` for proper traceback.

Per static analysis (B904), re-raising in an `except` block should chain the original exception.

Proposed fix
```diff
     except FileNotFoundError:
-        raise RuntimeError("ffmpeg not found. Install ffmpeg for video encoding.")
+        raise RuntimeError("ffmpeg not found. Install ffmpeg for video encoding.") from None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tensorrt_llm/serve/media_storage.py` around lines 214 - 216, The except FileNotFoundError block in media_storage.py should preserve the original traceback: change the handler to capture the original exception (e.g., use "except FileNotFoundError as e") and re-raise the RuntimeError using exception chaining (raise RuntimeError("ffmpeg not found. Install ffmpeg for video encoding.") from e); locate this change around the FileNotFoundError handler inside the video encoding function to ensure proper traceback propagation.

tensorrt_llm/serve/openai_server.py (2)
1809-1819: Same redundant `hasattr` check in `delete_video`.

Same as `get_video_content` — simplify to `if job.output_path and os.path.exists(job.output_path)`.

Proposed simplification
```diff
-        if hasattr(job, 'output_path') and job.output_path and os.path.exists(job.output_path):
+        if job.output_path and os.path.exists(job.output_path):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tensorrt_llm/serve/openai_server.py` around lines 1809 - 1819, The delete_video function contains a redundant hasattr(job, 'output_path') check similar to get_video_content; simplify the conditional to check job.output_path directly and os.path.exists(job.output_path) (e.g., replace "if hasattr(job, 'output_path') and job.output_path and os.path.exists(job.output_path):" with "if job.output_path and os.path.exists(job.output_path):") while preserving the subsequent fallback that scans self.media_storage_path for extensions; update references to video_path accordingly so logic and variable names remain unchanged.
1751-1761: `hasattr(job, 'output_path')` is unnecessary for a Pydantic model field.

`output_path` is declared on `VideoJob` with `default=None`, so the attribute always exists. The `hasattr` check is misleading — just check `job.output_path` directly.

Proposed simplification
```diff
-        if hasattr(job, 'output_path') and job.output_path and os.path.exists(job.output_path):
+        if job.output_path and os.path.exists(job.output_path):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tensorrt_llm/serve/openai_server.py` around lines 1751 - 1761, The hasattr(job, 'output_path') check is redundant for the Pydantic VideoJob (output_path defaults to None); simplify the logic by directly checking job.output_path and os.path.exists(job.output_path) to set video_path, otherwise fall back to iterating extensions using self.media_storage_path / f"{video_id}{ext}"; update the block that assigns video_path (referencing job.output_path, video_id, self.media_storage_path) to remove hasattr and rely on truthiness of job.output_path.

examples/visual_gen/serve/async_video_gen.py (1)
131-143: Content-Type header access is fragile with `getattr` chain.

The `getattr(content.response, "headers", {})` works but depends on the OpenAI SDK's internal `content.response` structure. If the SDK changes, `content.response` might not have a `.headers` attribute and the fallback `{}` would silently default to MP4. This is acceptable for an example script, but worth noting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/visual_gen/serve/async_video_gen.py` around lines 131 - 143, The code fragility stems from accessing headers via getattr(content.response, "headers", {}) which can silently fall back to {} if content.response or its headers attribute is missing; update the access in the block that computes content_type so it safely checks for the existence of response and headers (e.g., use a small safe accessor or try/except AttributeError to read content.response.headers or response.get("headers") when available), then derive content_type from that safe headers value before deciding actual_ext and possibly updating output_file/output_path; specifically update the logic around content, content.response, content.response.headers, content_type, actual_ext, output_file and output_path to avoid silent defaults and handle missing headers explicitly (log a warning or default to "video/mp4" only when headers truly absent).
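One hedged way to make that header access explicit, per the prompt above. The `content.response.headers` shape is the assumed SDK structure, and `resolve_extension` is a hypothetical helper name, not code from the PR:

```python
import logging
from types import SimpleNamespace

logger = logging.getLogger(__name__)


def resolve_extension(content) -> str:
    """Pick a file extension from the response Content-Type, defaulting
    to .mp4 only when headers are truly absent (with a warning)."""
    response = getattr(content, "response", None)
    headers = getattr(response, "headers", None)
    if headers is None:
        logger.warning("No headers on response; assuming video/mp4")
        content_type = "video/mp4"
    else:
        content_type = headers.get("content-type", "video/mp4")
    # MJPEG-in-AVI responses would be served as video/avi or video/x-msvideo
    if "avi" in content_type or "msvideo" in content_type:
        return ".avi"
    return ".mp4"


# Stand-in response object mimicking the assumed SDK structure
fake = SimpleNamespace(
    response=SimpleNamespace(headers={"content-type": "video/avi"})
)
```

Missing headers now produce a visible warning instead of a silent MP4 default.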
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tensorrt_llm/serve/media_storage.py`:
- Around line 85-120: The current shape-heuristic in the audio normalization
block (variable audio_tensor and its ndim branches) misidentifies which axis is
channels by checking shape[0] == 2; instead, determine the channel axis by
comparing dimension sizes: treat the smaller dimension as channels if it is
reasonably small (e.g., <= 8) and the larger dimension as samples (e.g., > 8);
if neither dimension meets that heuristic, raise a clear ValueError asking
callers to provide (samples, channels) format. Update the ndim==2 and ndim==3
branches to use this size-based detection before transposing or slicing to two
channels, remove the redundant final if audio_tensor.shape[1] > 2 slice by
consolidating channel clipping into the same logic, and add a short comment
documenting the expected (samples, channels) convention and the fallback
ambiguity error.
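The size-based heuristic described in that prompt can be sketched as follows. This assumes NumPy arrays; the function name and the bound of 8 channels are illustrative choices, not the PR's code:

```python
import numpy as np

MAX_CHANNELS = 8  # axes no larger than this are treated as channels


def normalize_audio(audio: np.ndarray) -> np.ndarray:
    """Return audio in (samples, channels) layout, clipped to 2 channels."""
    if audio.ndim == 1:
        return audio[:, None]  # mono -> (samples, 1)
    if audio.ndim != 2:
        raise ValueError("expected 1-D or 2-D audio")
    rows, cols = audio.shape
    if cols <= MAX_CHANNELS < rows:
        pass  # already (samples, channels)
    elif rows <= MAX_CHANNELS < cols:
        audio = audio.T  # (channels, samples) -> (samples, channels)
    else:
        raise ValueError("ambiguous audio shape; provide (samples, channels)")
    return audio[:, :2]  # channel clipping consolidated in one place
```

Note this raises on genuinely ambiguous shapes (both axes small or both large) rather than guessing, as the prompt recommends.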
In `@tensorrt_llm/serve/openai_protocol.py`:
- Around line 1330-1331: The VideoJob schema exposes an internal filesystem path
via the Field named output_path which gets serialized through model_dump() in
get_video_metadata and list_videos; update the VideoJob definition in
openai_protocol.py to prevent leaking by marking output_path as excluded from
serialization (e.g., Field(..., exclude=True)) or convert it to a private
attribute (PrivateAttr) so it remains server-only, and run the endpoints that
call model_dump() (get_video_metadata, list_videos) to verify the path no longer
appears in serialized output.
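The exclusion described above might look like the following. The `VideoJob` fields other than `output_path` are placeholders, and this assumes Pydantic v2's `model_dump()`:

```python
from typing import Optional

from pydantic import BaseModel, Field


class VideoJob(BaseModel):
    # Placeholder fields; the real model has more.
    id: str
    status: str = "queued"
    # exclude=True keeps the internal filesystem path out of model_dump()
    # and therefore out of get_video_metadata / list_videos responses.
    output_path: Optional[str] = Field(default=None, exclude=True)


job = VideoJob(id="video-123", output_path="/srv/media/video-123.avi")
```

The field stays readable server-side (`job.output_path`) while never appearing in serialized payloads.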
---
Outside diff comments:
In `@tensorrt_llm/serve/media_storage.py`:
- Around line 691-698: The fallback writes a PNG using
output_path.replace(".mp4", ".png") which fails if earlier code (e.g.,
PurePythonEncoder) changed the extension to .avi; update the except block in
MediaStorage (the function that encodes video and calls
MediaStorage._save_middle_frame) to compute the PNG path by replacing the file
extension using os.path.splitext(output_path)[0] + ".png" (or equivalent) rather
than string replace, and ensure you import os if not already present so the PNG
is written with the correct .png extension.
- Around line 625-658: convert_video_to_bytes assumes save_video writes to
tmp_path but save_video (when using PurePythonEncoder) can change the extension
and write to a different file; update the data flow so convert_video_to_bytes
reads and cleans up the actual file written: modify save_video (or its callers)
to return the final output path (or accept and use a guaranteed final path),
then in convert_video_to_bytes call MediaStorage.save_video(...) and capture the
returned final_path (or compute the final path based on encoder), read bytes
from that final_path, and delete that final_path in the finally block (instead
of tmp_path); reference functions: convert_video_to_bytes,
MediaStorage.save_video, and encoder PurePythonEncoder.
In `@tensorrt_llm/serve/openai_server.py`:
- Around line 1628-1633: When output.video is None inside
_generate_video_background the code currently returns create_error_response(...)
which is discarded by the background task; instead update the job status to
"failed" and persist that update using the same mechanism used in the except
block below (so the client polling sees a terminal failure), log the error, and
then return/exit the background task. Ensure you reference the same job record
and status-update function used in the except handler rather than relying on the
discarded create_error_response return value.
---
Nitpick comments:
In `@examples/visual_gen/serve/async_video_gen.py`:
- Around line 131-143: The code fragility stems from accessing headers via
getattr(content.response, "headers", {}) which can silently fall back to {} if
content.response or its headers attribute is missing; update the access in the
block that computes content_type so it safely checks for the existence of
response and headers (e.g., use a small safe accessor or try/except
AttributeError to read content.response.headers or response.get("headers") when
available), then derive content_type from that safe headers value before
deciding actual_ext and possibly updating output_file/output_path; specifically
update the logic around content, content.response, content.response.headers,
content_type, actual_ext, output_file and output_path to avoid silent defaults
and handle missing headers explicitly (log a warning or default to "video/mp4"
only when headers truly absent).
In `@tensorrt_llm/serve/media_storage.py`:
- Around line 451-456: get_video_encoder currently instantiates a new
FfmpegCliEncoder or PurePythonEncoder on every call; since these encoders are
stateless, change get_video_encoder to return a cached singleton instance
instead (e.g. store one module-level variable and initialize it on first call).
Update get_video_encoder to check the cached variable and only create
FfmpegCliEncoder or PurePythonEncoder once (referencing the existing function
get_video_encoder and classes FfmpegCliEncoder / PurePythonEncoder) so
subsequent calls reuse the same instance.
- Around line 26-39: The static analysis warning for using a partial executable
path in the subprocess.run call inside _check_ffmpeg_available is a false
positive; suppress it by adding a "# noqa: S607" comment to the subprocess.run
invocation (or its argument line) where ["ffmpeg", "-version"] is passed so the
linter ignores this specific S607 warning while leaving the rest of the function
unchanged.
- Around line 441-456: The get_video_encoder function never returns None (it
always returns FfmpegCliEncoder() or PurePythonEncoder()), so update its
signature and docstring to return VideoEncoder (non-Optional) and remove the
misleading "or None" text; then remove the dead None-handling branch in
_save_mp4 (and any other callers that check for None) so callers assume a valid
VideoEncoder is returned; keep the use of _check_ffmpeg_available(),
FfmpegCliEncoder, and PurePythonEncoder as-is.
- Around line 214-216: The except FileNotFoundError block in media_storage.py
should preserve the original traceback: change the handler to capture the
original exception (e.g., use "except FileNotFoundError as e") and re-raise the
RuntimeError using exception chaining (raise RuntimeError("ffmpeg not found.
Install ffmpeg for video encoding.") from e); locate this change around the
FileNotFoundError handler inside the video encoding function to ensure proper
traceback propagation.
In `@tensorrt_llm/serve/openai_server.py`:
- Around line 1809-1819: The delete_video function contains a redundant
hasattr(job, 'output_path') check similar to get_video_content; simplify the
conditional to check job.output_path directly and
os.path.exists(job.output_path) (e.g., replace "if hasattr(job, 'output_path')
and job.output_path and os.path.exists(job.output_path):" with "if
job.output_path and os.path.exists(job.output_path):") while preserving the
subsequent fallback that scans self.media_storage_path for extensions; update
references to video_path accordingly so logic and variable names remain
unchanged.
- Around line 1751-1761: The hasattr(job, 'output_path') check is redundant for
the Pydantic VideoJob (output_path defaults to None); simplify the logic by
directly checking job.output_path and os.path.exists(job.output_path) to set
video_path, otherwise fall back to iterating extensions using
self.media_storage_path / f"{video_id}{ext}"; update the block that assigns
video_path (referencing job.output_path, video_id, self.media_storage_path) to
remove hasattr and rely on truthiness of job.output_path.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
examples/visual_gen/serve/README.md
examples/visual_gen/serve/async_video_gen.py
examples/visual_gen/serve/sync_video_gen.py
tensorrt_llm/serve/media_storage.py
tensorrt_llm/serve/openai_protocol.py
tensorrt_llm/serve/openai_server.py
PR_Github #36607 [ run ] triggered by Bot. Commit:
…s (B607) and add nosec comment for subprocess import (B404).

Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
/bot run

PR_Github #36617 [ run ] triggered by Bot. Commit:

PR_Github #36617 [ run ] completed with state
Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
/bot run

PR_Github #37000 [ run ] triggered by Bot. Commit:
zhenhuaw-me left a comment:

TRTLLM-11184 to follow up on the video format issue.
PR_Github #37000 [ run ] completed with state
/bot run

PR_Github #37139 [ run ] triggered by Bot. Commit:

/bot run --disable-fail-fast

PR_Github #37224 [ run ] triggered by Bot. Commit:
PR_Github #37224 [ run ] completed with state

/bot run

PR_Github #37237 [ run ] triggered by Bot. Commit:

PR_Github #37237 [ run ] completed with state
NVIDIA#11672) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
…e Python MJPEG/AVI encoder
Summary by CodeRabbit
New Features
Documentation
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
Details
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

`--reuse-test (optional)pipeline-id` (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

`--disable-reuse-test` (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

`--disable-fail-fast` (OPTIONAL): Disable fail fast on build/tests/infra failures.

`--skip-test` (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

`--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

`--gpu-type "A30, H100_PCIe"` (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

`--test-backend "pytorch, cpp"` (OPTIONAL): Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

`--only-multi-gpu-test` (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

`--disable-multi-gpu-test` (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

`--add-multi-gpu-test` (OPTIONAL): Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

`--post-merge` (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

`--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL): Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

`--detailed-log` (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

`--debug` (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.

kill

`kill`: Kill all running builds associated with pull request.

skip

`skip --comment COMMENT`: Skip testing for latest commit on pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

`reuse-pipeline`: Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.