Commit 4ff8def

bioshazard and abetlen authored
abetlen#717: Add support for Huggingface Autotokenizer (abetlen#790)
Co-authored-by: Andrei <abetlen@gmail.com>
1 parent 3580e2c commit 4ff8def

1 file changed, +20 -0 lines changed

llama_cpp/llama_chat_format.py: 20 additions & 0 deletions
@@ -510,6 +510,26 @@ def format_chatml(
     _prompt = _format_chatml(system_message, _messages, _sep)
     return ChatFormatterResponse(prompt=_prompt)
 
+# eg, export HF_MODEL=mistralai/Mistral-7B-Instruct-v0.1
+@register_chat_format("autotokenizer")
+def format_autotokenizer(
+    messages: List[llama_types.ChatCompletionRequestMessage],
+    **kwargs: Any,
+) -> ChatFormatterResponse:
+    # https://huggingface.co/docs/transformers/main/chat_templating
+    # https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1#instruction-format
+    # https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1/blob/main/tokenizer_config.json
+    import os
+    from transformers import AutoTokenizer
+    huggingFaceModel = os.getenv("HF_MODEL") # eg, mistralai/Mistral-7B-Instruct-v0.1
+    print(huggingFaceModel)
+    if not huggingFaceModel:
+        raise Exception("HF_MODEL needs to be set in env to use chat format 'autotokenizer'")
+    tokenizer = AutoTokenizer.from_pretrained(huggingFaceModel)
+    tokenizer.use_default_system_prompt = False
+    _prompt = tokenizer.apply_chat_template(messages, tokenize=False)
+    # Return formatted prompt and eos token by default
+    return ChatFormatterResponse(prompt=_prompt, stop=tokenizer.eos_token)
 
 @register_chat_completion_handler("functionary")
 def functionary_chat_handler(
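
For reference, a minimal standalone sketch of the transformers call that this new formatter wraps. It assumes transformers (plus the tokenizer's dependencies, e.g. sentencepiece) is installed and huggingface.co is reachable; the example messages are placeholders:

from transformers import AutoTokenizer

# Downloads the tokenizer and its chat template on first use.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer.use_default_system_prompt = False

messages = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "Doing well, thanks!"},
]

# tokenize=False returns the rendered prompt string rather than token ids;
# Mistral's template wraps user turns in [INST] ... [/INST] markers.
print(tokenizer.apply_chat_template(messages, tokenize=False))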

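And a sketch of how the format might be used end to end through llama-cpp-python once this commit is included; the GGUF path is a placeholder, and HF_MODEL is read from the environment each time the formatter runs, so it must be set before the first chat completion:

import os

from llama_cpp import Llama

# format_autotokenizer reads HF_MODEL at format time, so export it
# (or set it here) before calling create_chat_completion.
os.environ["HF_MODEL"] = "mistralai/Mistral-7B-Instruct-v0.1"

# Placeholder path; any local GGUF build of the same model works.
llm = Llama(
    model_path="./mistral-7b-instruct-v0.1.Q4_K_M.gguf",
    chat_format="autotokenizer",
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF in one sentence."}],
)
print(response["choices"][0]["message"]["content"])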