Commit a7bdccf (parent 04a862e): Updated README — 1 file changed (README.md, 49 additions, 19 deletions)
[Docker on termux (requires root)](https://gist.github.com/FreddieOliveira/efe850df7ff3951cb62d74bd770dce27) is currently the only known way to run this on phones, see [termux support issue](https://github.com/abetlen/llama-cpp-python/issues/389)

# 🃏 SeKernel_for_LLM
This is a Python module used to create a semantic kernel in your OpenAI-API-compatible chat applications.

### 🍬 Features
- In-chat memory
- Internet search
- Database querying

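In-chat memory here means keeping the conversation as an OpenAI-style list of role-tagged messages that grows with each turn. A minimal sketch of the idea (the `remember` helper is illustrative, not part of this module):

```python
# In-chat memory as an OpenAI-style message list.
# The `remember` helper is illustrative, not part of SeKernel_for_LLM.
history = [
    {"role": "system", "content": "You are an intelligent assistant."},
]

def remember(role: str, content: str) -> list:
    """Append a turn so later completions see the whole conversation."""
    history.append({"role": role, "content": content})
    return history

remember("user", "What is the square root of 2?")
remember("assistant", "Approximately 1.41421356.")
```

Each completion call then receives the full `history`, which is what gives the assistant memory of earlier turns.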
## ⚙️ How to:
- Clone the repo and import the modules into your project. Ensure that they are in the project directory.
- ```python
  import kernel
  import plugins

  ### INTERNET-SEARCH ###
  # Define the search plugin. If context equals None, use the Chat template.
  # See `kernel.py` for more templates.
  search_prompt = plugins.searchPlugin(output=question)

  # Initialize the kernel. See the plugins.py module for more plugins.
  # Pass context=None instead of context=search_prompt when no context is
  # available; the assistant then has no awareness of events that took place
  # after its training-data cutoff date.
  data = kernel.shopTemplate(prompt=prompt, plugin=plugins.defaultPlugin(), context=search_prompt)

  ### DATABASE ###
  # Initialize the database plugin
  db = plugins.dbConn()

  # Use the database plugin along with the dbChatPlugin
  data = kernel.chatTemplate(prompt=prompt, plugin=plugins.dbChatPlugin())

  # Execute the query
  db.execute(response)

  # Get the output
  response = db.fetchall()

  ### LlamaCpp ###
  # Pass the kernel model to LlamaCpp
  from llama_cpp import Llama

  client = Llama(
      model_path=kernel.model()  # Make sure to add your GGUF model in the kernel module.
  )

  # Use the kernel and set the messages parameter equal to data. Depending on
  # your LLM API definition, messages may be a different parameter; here it is
  # messages, as defined in the OpenAI API.
  output = client.create_chat_completion(
      messages=data
  )
  ```
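The `db.execute(response)` / `db.fetchall()` calls above follow Python's standard DB-API cursor pattern. A self-contained sketch using the stdlib `sqlite3` module as a stand-in for whatever backend `plugins.dbConn()` wraps (the table and data here are invented for illustration):

```python
import sqlite3

# Stand-in for plugins.dbConn(): any DB-API connection/cursor works the same way.
conn = sqlite3.connect(":memory:")
db = conn.cursor()
db.execute("CREATE TABLE prices (item TEXT, price REAL)")
db.execute("INSERT INTO prices VALUES ('leggings', 19.99)")

# In the kernel flow, `response` would be model-generated SQL; hard-coded here.
response = "SELECT item, price FROM prices"
db.execute(response)
rows = db.fetchall()  # list of tuples, one per matching row
```

Because the SQL string comes from the model, a real deployment should validate or sandbox it before execution.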
See the [OpenAI](https://platform.openai.com/docs/api-reference/chat/create) API reference for more.
- You may then append any new content and/or messages to the kernel:
  ```python
  data.append(new_message)
  ```
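Since `data` is a plain Python list of message dicts, multi-turn conversations are just repeated appends. A worked example (the message contents are invented for illustration):

```python
# A kernel-built messages list, as in the How-to section above.
data = [
    {"role": "system", "content": "You are an intelligent assistant."},
    {"role": "user", "content": "What is the square root of 2?"},
]

# Append the model's reply so the next turn has full context.
new_message = {"role": "assistant", "content": "Approximately 1.41421356."}
data.append(new_message)

# Then append the user's follow-up question before the next completion call.
data.append({"role": "user", "content": "And the square root of 3?"})
```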
## 📽️ Short Films
See examples of using the SeKernel_for_LLM with 🦙 [LlamaCpp Python bindings](https://github.com/abetlen/llama-cpp-python).

#### The square-root of 2
https://github.com/user-attachments/assets/cb48e962-2cba-4672-b4e7-c0f77455bb74

#### Internet search with Google for the price of leggings
https://github.com/user-attachments/assets/fdf5ac16-c8b7-4b39-b91b-d4e2d8d2d888

#### Database query
https://github.com/user-attachments/assets/ad9c87a4-475f-4ca0-a576-109709ca84b0

## Low-level API

0 commit comments

Comments
0 (0)
Morty Proxy This is a proxified and sanitized view of the page, visit original site.