Add GRPO callbacks for OLMo-core Trainer (GRPO olmo-core: PR 3 of 5)#1397

Merged
finbarrtimbers merged 1 commit into allenai/open-instruct:main from allenai/open-instruct:finbarr/grpo-callbacks-module
Mar 12, 2026

Conversation

@finbarrtimbers
Collaborator

@finbarrtimbers finbarrtimbers commented Jan 20, 2026

Adds callbacks for GRPO training with OLMo-core's Trainer:

  • VLLMWeightSyncCallback: syncs weights to vLLM engines after each step
  • RefPolicyUpdateCallback: Polyak averaging for reference policy updates
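As a minimal sketch of what a Polyak-averaged reference-policy update like this typically looks like (assuming PyTorch; the function name `polyak_update` and the coefficient `alpha` are illustrative, not taken from this PR):

```python
import torch

@torch.no_grad()
def polyak_update(ref_model: torch.nn.Module, policy_model: torch.nn.Module, alpha: float = 0.05) -> None:
    """Blend the reference policy toward the current policy, in place:
    ref <- (1 - alpha) * ref + alpha * policy, parameter by parameter."""
    for ref_p, pol_p in zip(ref_model.parameters(), policy_model.parameters()):
        ref_p.mul_(1.0 - alpha).add_(pol_p, alpha=alpha)
```

With `alpha=1.0` this degenerates to a hard copy of the policy into the reference model; small values give a slowly trailing reference, which is what a KL penalty against a reference policy usually wants.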

Based on PR #1412 (GRPOTrainModule)

GPU_TESTS=01KFKCWJQKNEB71EZSA93XRKWF

@gemini-code-assist
Contributor

Summary of Changes

Hello @finbarrtimbers, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces foundational components for Group Relative Policy Optimization (GRPO) within the OLMo-core framework. It establishes new callback mechanisms for managing model weight synchronization with vLLM inference engines and updating reference policies, alongside a dedicated training module that implements the GRPO algorithm. These additions lay the groundwork for advanced reinforcement learning from human feedback (RLHF) training workflows, specifically for the OLMo-core actor.

Highlights

  • New GRPO Callbacks: Introduced grpo_callbacks.py containing VLLMWeightSyncCallback for synchronizing model weights to vLLM inference engines, RefPolicyUpdateCallback for Polyak averaging of reference policy, and DataPreparationActorCheckpointCallback for managing actor state during checkpointing.
  • GRPO Training Module: Added olmo_core_train_modules.py with GRPOTrainModule, which integrates Group Relative Policy Optimization (GRPO) training into the OLMo-core framework. This module supports PPO-style training, various loss functions (DAPO/CISPO), KL penalty computation, and importance sampling with clipping.
  • HuggingFace Name Mapping Utility: Included a utility function olmo_core_to_hf_name to convert OLMo-core parameter names to HuggingFace format, facilitating compatibility with Qwen3/LLaMA models.
  • Type Checking Integration: Updated pyproject.toml to include the newly added GRPO callback and training module files for static type checking, ensuring code quality and maintainability.
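For orientation, the core GRPO computation described above can be sketched as follows. This is a hedged illustration assuming PyTorch; the function names, the `eps_low`/`eps_high` clip bounds, and the tensor shapes are assumptions for exposition, not code from this PR:

```python
import torch

def group_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # rewards: [num_prompts, group_size]; GRPO normalizes rewards within
    # each prompt's group of sampled completions instead of using a critic.
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

def clipped_surrogate(logprobs: torch.Tensor, old_logprobs: torch.Tensor,
                      advantages: torch.Tensor,
                      eps_low: float = 0.2, eps_high: float = 0.28) -> torch.Tensor:
    # PPO-style per-token importance ratio against the behavior policy
    ratio = torch.exp(logprobs - old_logprobs)
    # Asymmetric clip range [1 - eps_low, 1 + eps_high], in the spirit of
    # DAPO's "clip-higher"; symmetric PPO clipping is eps_low == eps_high.
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high)
    # Pessimistic (min) surrogate objective, negated so it can be minimized.
    return -torch.min(ratio * advantages, clipped * advantages).mean()
```

When the policy has not moved (`logprobs == old_logprobs`), the ratio is 1 everywhere and the loss reduces to the negated mean advantage, which is a convenient sanity check for an implementation.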


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 3a8a137290

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread open_instruct/olmo_core_train_modules.py Outdated
Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces GRPO-specific callbacks and a training module for OLMo-core, which are foundational components for the OLMo-core actor. The changes include VLLMWeightSyncCallback for synchronizing weights, RefPolicyUpdateCallback for Polyak updates, and GRPOTrainModule for the GRPO training algorithm. The pyproject.toml file has been updated to include these new files for type checking. Overall, the code is well-structured and follows existing patterns, but there are a few areas for improvement regarding type safety, code duplication, and error handling specificity.

Comment thread open_instruct/grpo_callbacks.py Outdated
Comment thread open_instruct/grpo_callbacks.py Outdated
Comment thread open_instruct/grpo_callbacks.py Outdated
Comment thread open_instruct/grpo_callbacks.py Outdated
Comment thread open_instruct/grpo_callbacks.py
Comment thread open_instruct/olmo_core_train_modules.py Outdated
@finbarrtimbers finbarrtimbers force-pushed the finbarr/grpo-callbacks-module branch from e48dd77 to cd3503f Compare January 20, 2026 22:45
@finbarrtimbers finbarrtimbers force-pushed the finbarr/grpo-utils-config branch from f1f3628 to 691668a Compare January 20, 2026 22:53
Base automatically changed from finbarr/grpo-utils-config to main January 20, 2026 23:43
@finbarrtimbers finbarrtimbers force-pushed the finbarr/grpo-callbacks-module branch 3 times, most recently from 8c2e075 to 0baad0b Compare January 22, 2026 17:23
@finbarrtimbers finbarrtimbers changed the title Add GRPO callbacks and training module (GRPO olmo-core implementation: PR 2 of 4) Add GRPO callbacks for OLMo-core Trainer (GRPO olmo-core: PR 3 of 5) Jan 22, 2026
@finbarrtimbers finbarrtimbers changed the base branch from main to finbarr/grpo-train-module January 26, 2026 15:37
@finbarrtimbers finbarrtimbers force-pushed the finbarr/grpo-callbacks-module branch from 563461f to 477fe56 Compare January 26, 2026 16:25
Comment thread open_instruct/grpo_callbacks.py Outdated
Comment thread open_instruct/vllm_utils.py
Collaborator

@hamishivi hamishivi left a comment


Happy to merge this, although we probably want to test these code chunks more thoroughly once more of the GRPO implementation is in.

@finbarrtimbers
Collaborator Author

Happy to merge this, although we probably want to test these code chunks more thoroughly once more of the GRPO implementation is in.

Agreed! I did add some tests, but I agree completely.

@finbarrtimbers finbarrtimbers force-pushed the finbarr/grpo-train-module branch from edfb0e3 to 63008d7 Compare January 26, 2026 22:00
Base automatically changed from finbarr/grpo-train-module to main February 2, 2026 15:08
…R 3 of 5) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@finbarrtimbers finbarrtimbers force-pushed the finbarr/grpo-callbacks-module branch from fdd29d8 to 92963ef Compare March 11, 2026 23:57
@finbarrtimbers finbarrtimbers added this pull request to the merge queue Mar 12, 2026
Merged via the queue into main with commit f4a0b51 Mar 12, 2026
6 of 7 checks passed
@finbarrtimbers finbarrtimbers deleted the finbarr/grpo-callbacks-module branch March 12, 2026 14:26
