Clean up OLMo 3.X tokenizer docs, create instruct-release tokenizer#1487

Merged
natolambert merged 14 commits into allenai/open-instruct:main from allenai/open-instruct:docs-olmo3-tokenizer-cleanup
Feb 27, 2026

Conversation

@natolambert
Collaborator

@natolambert natolambert commented Feb 19, 2026

Summary

  • Rewrites the "OLMo 3.X and future models" section into clear per-stage bullet points (Think SFT, Think eval, Think release, Instruct release).
  • Adds a note about the plan to fix the underlying masking bug so the <think> workaround is no longer needed.
  • Renames think tokenizers from olmo-3 to olmo-3.2 since they include function calling (differs from original OLMo 3 think tokenizers).
  • Creates allenai/olmo-3-tokenizer-instruct-release on HuggingFace (same as instruct-dev but with Olmo identity system prompt).
  • Updates TLDR section to reference the four canonical tokenizer names instead of old dolma2 links. Fixes scoping so Think SFT clearly uses instruct-dev (not think-dev).
  • Moves tokenizer/chat-template scripts into scripts/tokenizers/: diff_tokenizers.py, export_chat_template.py, test_chat_templates.py, visualize_tokenization.py, render_chat_template_examples.py.
  • Adds --rev-a/--rev-b flags to diff_tokenizers.py for comparing the same repo at two different commits.
  • Fixes all four tokenizers: when tools is an empty list ([]), the template now correctly omits <functions> (previously it only checked for none, so [] rendered an empty <functions>[]</functions> block).
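The tools=[] fix hinges on the difference between a Jinja "is none" test and a plain truthiness check. A minimal sketch in plain Python standing in for the template logic (function name is illustrative, not from the repo):

```python
import json

def render_functions(tools):
    """Contrast the old and new chat-template checks for the tools arg.

    Old template: emitted <functions> whenever tools was not none,
    so an empty list still rendered <functions>[]</functions>.
    New template: emits <functions> only when tools is non-empty (truthy).
    """
    old = "" if tools is None else f"<functions>{json.dumps(tools)}</functions>"
    new = f"<functions>{json.dumps(tools)}</functions>" if tools else ""
    return old, new

# With tools=[], the old check renders an empty block; the fix omits it.
old, new = render_functions([])
```

Both checks agree for tools=None and for a non-empty tool list; they only diverge on the empty list.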

Tokenizer matrix

|          | dev (training/eval)              | release (public)                           |
|----------|----------------------------------|--------------------------------------------|
| Instruct | olmo-3-tokenizer-instruct-dev    | olmo-3-tokenizer-instruct-release (new)    |
| Think    | olmo-3.2-tokenizer-think-dev (renamed) | olmo-3.2-tokenizer-think-release (renamed) |
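The matrix reads as a simple stage-to-repo lookup. A sketch (repo IDs are the allenai/ HuggingFace names from this PR; the helper itself is illustrative):

```python
# Canonical tokenizer repos from the matrix above (HF org: allenai/).
TOKENIZERS = {
    ("instruct", "dev"): "allenai/olmo-3-tokenizer-instruct-dev",
    ("instruct", "release"): "allenai/olmo-3-tokenizer-instruct-release",
    ("think", "dev"): "allenai/olmo-3.2-tokenizer-think-dev",
    ("think", "release"): "allenai/olmo-3.2-tokenizer-think-release",
}

def tokenizer_for(flavor: str, stage: str) -> str:
    """Return the canonical tokenizer repo for a model flavor and stage."""
    return TOKENIZERS[(flavor, stage)]
```

Note the think repos carry the 3.2 prefix after the rename, while the instruct repos keep plain olmo-3.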

Test plan

  • Verify instruct-release differs from instruct-dev only in system prompt via diff_tokenizers.py
  • Verify think-release differs from think-dev only in system prompt via diff_tokenizers.py
  • Verify tools=[] fix via --rev-a/--rev-b diff on each tokenizer
  • Verify docs render correctly with uv run mkdocs serve

🤖 Generated with Claude Code

natolambert and others added 2 commits February 19, 2026 15:21
- Rewrite the OLMo 3.X section into clear per-stage bullet points
  (Think SFT, Think eval, Think release, Instruct release)
- Note plan to fix the <think> masking bug workaround
- Add TODO to consider renaming think-dev/release since they include
  function calling (differs from original OLMo 3 think tokenizers)
- Create allenai/olmo-3-tokenizer-instruct-release on HuggingFace
  (instruct-dev + Olmo identity system prompt)
- Update TLDR section to reference the four canonical tokenizer names

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello @natolambert, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the clarity and standardization of tokenizer usage for OLMo 3.X models. It refines the documentation to provide a structured guide for different stages of model development and deployment, introduces a new release-ready tokenizer, and addresses potential ambiguities in tokenizer naming and functionality. The changes aim to streamline the process of working with OLMo 3.X tokenizers for both training and inference.

Highlights

  • Tokenizer Documentation Clarity: Rewrote the "OLMo 3.X and future models" section into clear, per-stage bullet points for SFT, evaluation, and release tokenizers.
  • Think Tokenization Workaround: Added a note about the plan to fix the masking bug workaround, indicating it's a temporary solution.
  • Tokenizer Naming Convention: Included a TODO to consider renaming think-dev/think-release tokenizers due to their inclusion of function calling, which differs from original OLMo 3 think tokenizers.
  • New Instruct Release Tokenizer: Created and documented the allenai/olmo-3-tokenizer-instruct-release on HuggingFace, which is similar to instruct-dev but includes the Olmo identity system prompt.
  • TLDR Section Update: Updated the TLDR section to reference the four canonical tokenizer names, replacing older dolma2 links for better clarity and consistency.


Changelog
  • CHANGELOG.md
    • Cleaned up OLMo 3.X tokenizer documentation, clarified the think SFT tokenization workaround, added a dev/release tokenizer matrix, and created the allenai/olmo-3-tokenizer-instruct-release.
Activity
  • The pull request was generated using Claude Code.
  • A test plan was provided to verify documentation rendering and the correct loading of the new tokenizer.
  • A decision point was noted regarding the renaming of think-dev/think-release tokenizers.


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request significantly improves the clarity and organization of the OLMo 3.X tokenizer documentation. The changes, including rewriting a key section into a clear bulleted list and updating the TLDR with canonical tokenizer names, make the information much more accessible. The addition of the instruct-release tokenizer is also noted. I have one minor suggestion to update a placeholder in the changelog before merging.

Comment thread CHANGELOG.md
@github-actions
Contributor

Documentation Changes Detected

📄 olmo3/index.html
--- site-base/olmo3/index.html	2026-02-19 23:23:24.534448369 +0000
+++ site-pr/olmo3/index.html	2026-02-19 23:23:22.195434127 +0000
@@ -1052,15 +1052,18 @@
 </ul>
 <p><strong>Olmo 3.X and future models:</strong></p>
 <ul>
-<li>Tokenized with the same Instruct chat template  <a href="https://huggingface.co/allenai/olmo-3-tokenizer-instruct-dev"><code>allenai/olmo-3-tokenizer-instruct-dev</code></a> (since no <think>, which is our hack for tokenization to make sure the model learns how to generate <think> and it isn't masked out), ideally evaluated with a chat template <a href="https://huggingface.co/allenai/dolma2-tokenizer-special-tokens-reasoner-hybrid">here</a> that is the instruct chat template + think tokens (because new models should combine the tool use abilities from instruct chat template with the <think> for thinking models). Final models, if successfully trained, should likely release with <a href="https://huggingface.co/allenai/dolma2-tokenizer-special-tokens-v5-lc-reasoner">this</a> which is the same as the evaluated one but with olmo identity.</li>
+<li><strong>Think SFT data</strong> is tokenized with the Instruct chat template <a href="https://huggingface.co/allenai/olmo-3-tokenizer-instruct-dev"><code>allenai/olmo-3-tokenizer-instruct-dev</code></a>. This template does not include <code>&lt;think&gt;</code>, which prevents <code>&lt;think&gt;</code> from being masked out during tokenization so the model learns to generate it. (We plan to fix the underlying masking bug so this workaround is no longer needed.)</li>
+<li><strong>Think evaluation</strong> should use <a href="https://huggingface.co/allenai/olmo-3-tokenizer-think-dev"><code>allenai/olmo-3-tokenizer-think-dev</code></a>, which is the instruct chat template plus <code>&lt;think&gt;</code> in <code>add_generation_prompt</code> (new models should combine tool use abilities from the instruct template with <code>&lt;think&gt;</code> for reasoning). (TODO: check if this tokenizer should be renamed to <code>olmo-3.X-tokenizer-think-dev</code> since it includes function calling in the template, which differs from the original OLMo 3 think tokenizers.)</li>
+<li><strong>Think release models</strong> should use <a href="https://huggingface.co/allenai/olmo-3-tokenizer-think-release"><code>allenai/olmo-3-tokenizer-think-release</code></a>, which is the same as the think-dev template but with the Olmo identity system prompt.</li>

Showing first 10 lines of diff for each changed file (up to 5 files, excluding search indices).



@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 57c3e10be6


Comment thread docs/olmo3.md Outdated
natolambert and others added 2 commits February 19, 2026 15:24
The think tokenizers include function calling (unlike original OLMo 3
think tokenizers), so rename to 3.2 to make this clear.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The TLDR bullet for Think incorrectly suggested using think-dev for
all training including SFT. Think SFT must use instruct-dev to avoid
the <think> masking bug. think-dev is only for evaluation and
post-SFT stages (DPO, RL).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
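The masking bug described above can be illustrated with a toy sketch (pure Python, not the actual open-instruct tokenization code; string "tokens" stand in for token IDs): in SFT, prompt-side positions get the ignore label, so if the chat template emits <think> as part of the generation prompt, it is never a training target.

```python
IGNORE_INDEX = -100  # conventional "masked" label in HF-style SFT losses

def sft_labels(prompt_tokens, response_tokens):
    """Mask the prompt region; only response tokens contribute to the loss."""
    return [IGNORE_INDEX] * len(prompt_tokens) + list(response_tokens)

# Think template: <think> lands in the prompt region and is masked out,
# so the model never learns to generate it.
prompt = ["<|user|>", "hi", "<|assistant|>", "<think>"]
response = ["reasoning", "</think>", "answer"]
labels = sft_labels(prompt, response)

# Workaround: tokenize Think SFT data with the instruct template (no
# <think> in the prompt), keeping <think> inside the response where it
# remains an unmasked training target.
prompt2 = ["<|user|>", "hi", "<|assistant|>"]
response2 = ["<think>", "reasoning", "</think>", "answer"]
labels2 = sft_labels(prompt2, response2)
```

In the first case the <think> position carries IGNORE_INDEX; in the second it is a real label, which is why Think SFT must use instruct-dev while think-dev is reserved for evaluation and post-SFT stages.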
@github-actions
Contributor

Documentation Changes Detected

📄 olmo3/index.html
--- site-base/olmo3/index.html	2026-02-19 23:25:48.260797523 +0000
+++ site-pr/olmo3/index.html	2026-02-19 23:25:45.790818655 +0000
@@ -1050,17 +1050,20 @@
 <li><a href="https://huggingface.co/allenai/Olmo-3-32B-Think">32b</a>: Training data tokenized with the <code>olmo_thinker_no_think_sft_tokenization</code> chat template (otherwise identical, doesn't have olmo identity in the prompt), released with that chat template + the think token in <code>add_generation_prompt</code>.</li>
 <li>Reason for the difference between 7b and 32b: we learned as we went to not have the identity baked into the prompt (so it was easier to fix at the time of the demo in the form of a system prompt) but couldn't afford to retrain 7b thinking model at that point.</li>
 </ul>
-<p><strong>Olmo 3.X and future models:</strong></p>
+<p><strong>Olmo 3.2+ models:</strong></p>
 <ul>
-<li>Tokenized with the same Instruct chat template  <a href="https://huggingface.co/allenai/olmo-3-tokenizer-instruct-dev"><code>allenai/olmo-3-tokenizer-instruct-dev</code></a> (since no <think>, which is our hack for tokenization to make sure the model learns how to generate <think> and it isn't masked out), ideally evaluated with a chat template <a href="https://huggingface.co/allenai/dolma2-tokenizer-special-tokens-reasoner-hybrid">here</a> that is the instruct chat template + think tokens (because new models should combine the tool use abilities from instruct chat template with the <think> for thinking models). Final models, if successfully trained, should likely release with <a href="https://huggingface.co/allenai/dolma2-tokenizer-special-tokens-v5-lc-reasoner">this</a> which is the same as the evaluated one but with olmo identity.</li>

Showing first 10 lines of diff for each changed file (up to 5 files, excluding search indices).


- Add scripts/utils/diff_tokenizers.py for comparing HF tokenizer
  repos file-by-file with pretty-printed chat_template diffs
- Reference the tool in the docs
- Verified all four tokenizer pairs: dev vs release differs only
  in the system prompt as expected

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

natolambert and others added 2 commits February 19, 2026 15:29
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- export_chat_template.py: export chat template to Jinja file
- test_chat_templates.py: render canned examples with a chosen template
- visualize_tokenization.py: visualize SFT tokenization masking
- render_chat_template_examples.py: render chat template outputs for debugging

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>


Pretty-print Jinja templates with proper indentation (one statement
per line, nested blocks indented). Skip the unreadable raw
tokenizer_config.json diff and only show the formatted version.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Add --rev-a and --rev-b flags to diff_tokenizers.py so you can compare
a single repo across commits, e.g.:

  python scripts/tokenizers/diff_tokenizers.py allenai/olmo-3.2-tokenizer-think-dev \
    --rev-a abc123 --rev-b main

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>


- Add allow_patterns filter so snapshot_download only fetches tokenizer
  files, not model weights (safe to run on full model repos)
- Add chat_template.jinja to downloaded files and pretty-print its diff

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

When both a chat_template field in tokenizer_config.json and a standalone chat_template.jinja file exist in a HF repo, transformers loads the .jinja file.
Documented this to avoid subtle mismatches.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Supports loading real HuggingFace dataset rows (streaming, no full
download). Auto-detects SFT (messages key) and DPO (chosen/rejected
keys), rendering each conversation separately. Lazy-imports
open_instruct to avoid torch segfault on shutdown.

Usage:
  uv run python scripts/tokenizers/test_chat_templates.py \
    --model-name allenai/OLMo-3.2-Hybrid-7B-Instruct-SFT \
    --dataset allenai/Dolci-Instruct-DPO --row-idx 0

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
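The SFT/DPO auto-detection described in that commit can be sketched as a small key check. This is a hypothetical reconstruction of the logic, not the script's actual code:

```python
def detect_format(row: dict) -> str:
    """Classify a dataset row as SFT or DPO from its keys (illustrative sketch)."""
    if "messages" in row:
        return "sft"
    if "chosen" in row and "rejected" in row:
        return "dpo"
    raise ValueError(f"unrecognized row keys: {sorted(row)}")

print(detect_format({"messages": [{"role": "user", "content": "hi"}]}))  # -> sft
print(detect_format({"chosen": [], "rejected": []}))  # -> dpo
```

For a DPO row, each of the chosen and rejected conversations would then be rendered through the chat template separately.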


@saumyamalik saumyamalik left a comment


looks great! Thanks for adding this!


@natolambert natolambert added this pull request to the merge queue Feb 27, 2026
Merged via the queue into main with commit 2f04046 Feb 27, 2026
7 checks passed
@natolambert natolambert deleted the docs-olmo3-tokenizer-cleanup branch February 27, 2026 18:58

