# SeKernel_for_LLM
This is a Python module used to add conversational memory to your chat applications.
## ⚙️ How to:
- Clone the repo and import the module into your project. Ensure that it is in the project directory.

```
import kernel

# Initialize the kernel
data = kernel

# Create a template. Here, `question` is an example variable holding the user's input.
question = "What is SeKernel_for_LLM?"

data = [
    {"role": "system", "content": """You are an intelligent assistant. You always provide well-reasoned answers that are both correct and helpful."""},
    {"role": "user", "content": question},
]

# Set the messages parameter equal to data. Depending on your LLM API definition,
# this parameter may go by a different name; here it is messages, as defined in
# the OpenAI API definition.
messages = data
```
See the [OpenAI](https://platform.openai.com/docs/api-reference/chat/create) API reference for more.
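For example, the populated `messages` list can be passed straight to a chat completions call. A minimal sketch, assuming the official `openai` Python client is installed and an API key is configured; the model name is illustrative:

```
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Pass the kernel data to the chat completions endpoint
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=messages,
)
print(response.choices[0].message.content)
```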
```
# You may then append any new content and/or messages to the kernel,
# e.g. new_message = {"role": "assistant", "content": reply}
data.append(new_message)
```
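Because `data` is just the running list of messages, memory across turns amounts to appending each reply and each new question before the next call. A short sketch, continuing from the OpenAI example above:

```
# Remember the assistant's reply so the next turn has full context
data.append({"role": "assistant", "content": response.choices[0].message.content})

# Add the user's next question and call the model again with the full history
data.append({"role": "user", "content": "Can you give me an example?"})
response = client.chat.completions.create(model="gpt-4o-mini", messages=data)
```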
## 📽️ Short Films
See an example of using SeKernel_for_LLM with the 🦙 [LlamaCpp Python bindings](https://github.com/abetlen/llama-cpp-python).
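As a rough sketch of what that can look like, assuming llama-cpp-python is installed and you have a local GGUF model (the model path below is a placeholder):

```
from llama_cpp import Llama

llm = Llama(model_path="./models/your-model.gguf")  # placeholder path

# The same kernel messages list plugs into the chat completion API
response = llm.create_chat_completion(messages=data)
print(response["choices"][0]["message"]["content"])
```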