README.md: 9 additions & 0 deletions
@@ -105,6 +105,15 @@ Below is a short example demonstrating how to use the high-level API to generate
 }
 ```
 
+### Adjusting the Context Window
+
+The context window of the Llama models determines the maximum number of tokens that can be processed at once. By default, this is set to 512 tokens, but can be adjusted based on your requirements.
+
+For instance, if you want to work with larger contexts, you can expand the context window by setting the n_ctx parameter when initializing the Llama object:
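The code block that follows this sentence is truncated in the hunk above. As a minimal sketch of what such an initialization might look like with llama-cpp-python's high-level API (the model path is a placeholder; adjust it for your setup):

```python
from llama_cpp import Llama

# Raise the context window from the default 512 tokens to 2048.
# model_path is a placeholder; point it at your own GGML model file.
llm = Llama(model_path="./models/7B/ggml-model.bin", n_ctx=2048)

# Prompts longer than 512 tokens can now be processed in a single call.
output = llm("Q: Name the planets in the solar system? A: ", max_tokens=128, stop=["Q:"])
print(output["choices"][0]["text"])
```

Note that a larger n_ctx increases memory usage, so it is worth setting it only as high as your prompts actually require.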