Wait for vLLM update_weight RPCs before unpausing actors in weight sync #1480

Merged

hamishivi merged 4 commits into allenai:main from MohdElgaar:fix/vllm-update-weight-before-unpause
Feb 19, 2026
Conversation

@MohdElgaar (Contributor)

The main thread could hang in health_check_fn() on check_background_threads.remote() because should_stop was set to False before all vLLM update_weight RPCs had finished: the vLLM engines stayed busy with the queued updates and never processed the health check.

Fix: wait for all engine.update_weight ObjectRefs returned by broadcast_to_vllm() to complete before calling set_should_stop(False), so the vLLM engines are idle by the time the health check runs.
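
A minimal sketch of the change, assuming broadcast_to_vllm() returns the list of per-engine update_weight ObjectRefs and that actor_manager is the Ray actor holding the should_stop flag (actor_manager and the exact call shapes are illustrative assumptions, not the repo's verbatim code):

```python
import ray

# Sketch only: block on every queued update_weight RPC before unpausing.
update_refs = broadcast_to_vllm()  # one ObjectRef per engine.update_weight call
ray.get(update_refs)               # returns once every engine has applied its update
ray.get(actor_manager.set_should_stop.remote(False))  # engines idle; safe to unpause
```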

@gemini-code-assist (bot, Contributor)

Summary of Changes

Hello @MohdElgaar, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical synchronization issue where the main thread could hang during health checks because vLLM engines were still processing queued weight updates. The change ensures that all vLLM update_weight RPCs are fully completed before actors are unpaused, guaranteeing that the engines are in an idle state and preventing the health check from blocking indefinitely.

Highlights

  • Prevented Health Check Hangs: Introduced a new waiting mechanism to ensure vLLM engines complete all weight update RPCs before actors are unpaused, thereby preventing potential hangs during health checks.
  • Improved Synchronization Logic: Added an explicit wait for engine.update_weight ObjectRefs returned by broadcast_to_vllm(), ensuring vLLM engines are idle before the should_stop flag is reset to False.


Changelog
  • open_instruct/grpo_fast.py
    • Added a new waiting step to ensure all vLLM engine update_weight RPCs are completed before unpausing actors.

@gemini-code-assist (bot) left a comment


Code Review

This pull request addresses a potential deadlock in the weight synchronization thread by ensuring all vLLM update_weight RPCs are completed before actors are unpaused. The fix involves collecting all ObjectRefs from the weight broadcast results and explicitly waiting for them to finish. The logic is sound and correctly resolves the described issue. I've added one minor suggestion to improve code conciseness.

Comment thread on open_instruct/grpo_fast.py (outdated)

@MohdElgaar (Contributor, Author)

Note: this only happens when inflight_updates=False; the stall never occurs when inflight_updates=True. With inflight_updates=False, _prepare_weight_update() drains all active requests before each weight update:

```python
def _prepare_weight_update(self, name: str, dtype: str) -> None:
    # Wait for all active requests to complete.
    while not self.inflight_updates and len(self.active_tasks) > 0:
        self.check_background_threads()
        time.sleep(DRAIN_ACTIVE_TASKS_SLEEP_S)
    expected_dtype = str(self.llm_engine.model_config.dtype)
    assert dtype == expected_dtype, f"Mismatched dtype for {name}: received {dtype!r}, expected {expected_dtype!r}"
```
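
This drain loop is what turns an early unpause into a deadlock. A hedged illustration of the interaction (the health_check_fn shape below is an assumption for clarity, not the exact repo code):

```python
import ray

def health_check_fn(vllm_engines):
    # Each check_background_threads.remote() call queues behind any
    # update_weight RPCs still pending on that engine actor. If actors were
    # unpaused (should_stop=False) before those RPCs finished, new generation
    # requests keep active_tasks non-empty, the drain loop in
    # _prepare_weight_update() never exits, and this ray.get() blocks forever.
    ray.get([engine.check_background_threads.remote() for engine in vllm_engines])
```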

@MohdElgaar force-pushed the fix/vllm-update-weight-before-unpause branch from 08f3505 to a8933de on February 19, 2026 at 15:27
@hamishivi (Collaborator) left a comment


Thank you so much for the PR! LGTM!

MohdElgaar and others added 2 commits February 19, 2026 14:29

fix: wait for vLLM update_weight RPCs before unpausing actors in weight sync

In some instances, should_stop was set to False before all vLLM engine update_weight
RPCs had completed. As a result:
- vLLM engine(s) never complete the update_weight tasks
- check_background_threads.remote() could not be processed
- Main thread blocked indefinitely in health_check_fn()

Fix: Wait for all engine.update_weight ObjectRefs (returned by broadcast_to_vllm)
to complete before calling set_should_stop(False). This guarantees vLLM engines
are idle before the health check runs check_background_threads.
@hamishivi (Collaborator)

@MohdElgaar if you add to the changelog and fix the quality checks I'm happy to merge!

hamishivi and others added 2 commits February 19, 2026 14:41

Add CHANGELOG.md entry for PR allenai#1480 and apply ruff format to
grpo_fast.py to pass CI checks.

Co-authored-by: Cursor <cursoragent@cursor.com>
@hamishivi enabled auto-merge February 19, 2026 22:46
@hamishivi added this pull request to the merge queue Feb 19, 2026
Merged via the queue into allenai:main with commit 8dfe8f2, Feb 19, 2026
7 checks passed
mnoukhov pushed a commit that referenced this pull request on Feb 25, 2026:

Wait for vLLM update_weight RPCs before unpausing actors in weight sync (#1480)
