# 🃏 SeKernel_for_LLM

This is a Python module used to create a semantic kernel in your OpenAI API-compatible chat applications.

### 🍬 Features
- In-chat memory
- Internet search
- Database querying

## ⚙️ How to:
- Clone the repo and import the modules into your project. Ensure that they are in the project directory.

```python
import kernel
import plugins

### INTERNET-SEARCH ###
# Define the search plugin
search_prompt = plugins.searchPlugin(output=question)  # If context equals None, use the Chat template. See kernel.py for more templates.

# Initialize the kernel
data = kernel.shopTemplate(prompt=prompt, plugin=plugins.defaultPlugin(), context=search_prompt)  # See the plugins.py module for more plugins.
# Pass context=None when no context is provided; the assistant will then have no awareness
# of events that took place after its training-data cutoff date.

### DATABASE ###
# Initialize the database plugin
db = plugins.dbConn()

# Use the database plugin along with the dbChatPlugin

# Execute the query
db.execute(response)

# Get the output
response = db.fetchall()
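
# A hedged sketch of feeding the results back to the kernel (assumes `response`
# now holds the fetched rows; the message wording below is illustrative only):
# data.append({"role": "user", "content": f"Database results: {response}"})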

### LlamaCpp ###
# Passing the kernel model to LlamaCpp
from llama_cpp import Llama

client = Llama(
    model_path=kernel.model()  # Make sure to add your GGUF model in the kernel module.
)

# Use the kernel and set the messages parameter equal to data. Depending on your LLM API
# definition, messages may be a different parameter; in this case it is messages, as
# defined in the OpenAI API definition.
output = client.create_chat_completion(
    messages=data
)
```
See [OpenAI](https://platform.openai.com/docs/api-reference/chat/create) API reference for more.
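
If you are talking to an OpenAI-compatible server instead of LlamaCpp, the same `data` list can be passed straight through. A minimal sketch, assuming the official `openai` Python client (v1+) and a placeholder local endpoint and model name:

```python
from openai import OpenAI

# Placeholder endpoint and key for a local OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="local-model",  # placeholder model name
    messages=data,        # the kernel's message list
)
print(response.choices[0].message.content)
```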

```python
# You may then append any new content and/or messages to the kernel
data.append(new_message)
```
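
Appending each turn back onto `data` is what gives the kernel its in-chat memory. A minimal multi-turn sketch, assuming `client` is the `Llama` instance created above and that each entry follows the OpenAI message format:

```python
# Simple chat loop: every user turn and assistant reply is appended to `data`,
# so the next completion sees the full conversation history.
while True:
    user_input = input("You: ")
    data.append({"role": "user", "content": user_input})

    output = client.create_chat_completion(messages=data)
    reply = output["choices"][0]["message"]["content"]
    print("Assistant:", reply)

    data.append({"role": "assistant", "content": reply})
```
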
## 📽️ Short Films
See examples of using SeKernel_for_LLM with 🦙 [LlamaCpp Python bindings](https://github.com/abetlen/llama-cpp-python).