Commit b454f40

retr0reg and abetlen authored

Merge pull request from GHSA-56xg-wfcc-g829

Co-authored-by: Andrei <abetlen@gmail.com>

1 parent 5ab40e6 · commit b454f40

1 file changed: +2 -1 lines changed

llama_cpp/llama_chat_format.py

@@ -11,6 +11,7 @@
 from typing import Any, Dict, Iterator, List, Literal, Optional, Tuple, Union, Protocol, cast
 
 import jinja2
+from jinja2.sandbox import ImmutableSandboxedEnvironment
 
 import numpy as np
 import numpy.typing as npt
@@ -191,7 +192,7 @@ def __init__(
         self.add_generation_prompt = add_generation_prompt
         self.stop_token_ids = set(stop_token_ids) if stop_token_ids is not None else None
 
-        self._environment = jinja2.Environment(
+        self._environment = ImmutableSandboxedEnvironment(
             loader=jinja2.BaseLoader(),
             trim_blocks=True,
             lstrip_blocks=True,
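
The change swaps the plain jinja2.Environment used to render chat templates for Jinja2's ImmutableSandboxedEnvironment, which refuses access to underscore-prefixed attributes and other unsafe operations inside a template. Below is a minimal sketch of that difference; it is not part of this commit, assumes Jinja2 3.x behavior, and the payload string is a hypothetical template-injection example, not one taken from the advisory.

# Sketch: contrast jinja2.Environment with ImmutableSandboxedEnvironment
# on a hypothetical SSTI-style payload that walks dunder attributes.
import jinja2
from jinja2.exceptions import SecurityError
from jinja2.sandbox import ImmutableSandboxedEnvironment

payload = "{{ ''.__class__.__mro__ }}"  # hypothetical malicious template

# Plain environment: dunder attribute access is allowed, so the template
# renders str's MRO, a common stepping stone toward Python internals.
plain = jinja2.Environment(loader=jinja2.BaseLoader())
print(plain.from_string(payload).render())

# Sandboxed environment: unsafe attribute access is blocked, so rendering
# the same payload raises SecurityError instead of exposing internals.
sandboxed = ImmutableSandboxedEnvironment(loader=jinja2.BaseLoader())
try:
    print(sandboxed.from_string(payload).render())
except SecurityError as exc:
    print(f"blocked by sandbox: {exc}")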

0 commit comments
