Commit c3c74c2
Commit message: Function: Save log in non steam mode
Parent: e05aa88
1 file changed: llama_cpp/server/app.py (14 additions, 0 deletions)
@@ -548,6 +548,20 @@ async def event_publisher(inner_send_chan: MemoryObjectSendStream):
         completion: llama_cpp.ChatCompletion = await run_in_threadpool(
             llama.create_chat_completion, **kwargs # type: ignore
         )
+        #print(json.dumps(completion,indent=4))
+
+        messageRole = ''
+        messageContent = ''
+        if 'role' in completion['choices'][0]['message']:
+            messageRole = completion['choices'][0]['message']['role']
+        if 'content' in completion['choices'][0]['message']:
+            messageContent = completion['choices'][0]['message']['content']
+        log['messages'].append({'role':messageRole, 'content':messageContent})
+
+        #print(json.dumps(log,indent=4))
+        if rediscon is not None:
+            logstr = json.dumps(log)
+            rediscon.rpush('llama.cpp', logstr)
         return completion
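The added lines extract the assistant message from the completion object, append it to a `log` dict, and push the serialized log onto a Redis list. A minimal runnable sketch of that flow is below; it assumes a `completion` dict shaped like an OpenAI-style chat completion, and it stands in for the Redis connection with a plain Python list (in the real handler, `rediscon.rpush('llama.cpp', logstr)` plays that role). The helper name `append_completion_to_log` is hypothetical, not part of the commit.

```python
import json

def append_completion_to_log(log, completion):
    """Append the assistant message from a chat completion to the log dict.

    Uses dict.get() with a default, which mirrors the commit's explicit
    'role' in ... / 'content' in ... membership checks.
    """
    message = completion['choices'][0]['message']
    log['messages'].append({
        'role': message.get('role', ''),
        'content': message.get('content', ''),
    })
    return log

# Stand-in for the Redis list: rpush appends a JSON string to the tail.
queue = []

log = {'messages': [{'role': 'user', 'content': 'Hi'}]}
completion = {'choices': [{'message': {'role': 'assistant',
                                       'content': 'Hello!'}}]}

append_completion_to_log(log, completion)
queue.append(json.dumps(log))  # real code: rediscon.rpush('llama.cpp', logstr)
```

Because each pushed entry is a self-contained JSON document, a separate consumer can pop entries off the list and `json.loads` them independently, without sharing state with the server process.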