Commit 6208751

Update chat prompt

1 parent 02f9fb8 commit 6208751


llama_cpp/llama.py

Lines changed: 4 additions & 2 deletions
@@ -696,10 +696,12 @@ def create_chat_completion(
             Generated chat completion or a stream of chat completion chunks.
         """
         stop = stop if stop is not None else []
+        chat_history = "".join(
+            f'### {"Human" if message["role"] == "user" else "Assistant"}:{message["content"]}'
             for message in messages
         )
-        PROMPT = f" \n\n### Instructions:{instructions}\n\n### Inputs:{chat_history}\n\n### Response:\nassistant: "
-        PROMPT_STOP = ["###", "\nuser: ", "\nassistant: ", "\nsystem: "]
+        PROMPT = chat_history + "### Assistant:"
+        PROMPT_STOP = ["### Assistant:", "### Human:", "\n"]
         completion_or_chunks = self(
             prompt=PROMPT,
             stop=PROMPT_STOP + stop,
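
For reference, a minimal standalone sketch of the prompt construction this commit introduces. The `messages` list below is a hypothetical example; the remaining lines mirror the additions in the diff:

# Sketch of the prompt construction introduced by this commit.
# The `messages` list is a hypothetical example.
messages = [
    {"role": "user", "content": " Hello, who are you?"},
    {"role": "assistant", "content": " I am a helpful assistant."},
    {"role": "user", "content": " What is the capital of France?"},
]

chat_history = "".join(
    f'### {"Human" if message["role"] == "user" else "Assistant"}:{message["content"]}'
    for message in messages
)
PROMPT = chat_history + "### Assistant:"
PROMPT_STOP = ["### Assistant:", "### Human:", "\n"]

print(PROMPT)
# ### Human: Hello, who are you?### Assistant: I am a helpful assistant.### Human: What is the capital of France?### Assistant:

Note that `"".join(...)` uses no separator, so turns are concatenated directly onto one line, and `PROMPT_STOP` includes `"\n"`, so the generated reply is cut at its first newline.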
