Commit 58cd657
Update README.md
1 parent db30a2b
1 file changed: 1 addition, 1 deletion
README.md
````diff
@@ -52,7 +52,7 @@ This allows you to use ggllm.cpp to inference falcon models with any OpenAI comp
 To install the server package and get started:
 
 ```bash
-python3 -m llama_cpp.server --model models/7B/ggml-model.bin
+python3 -m falcon_cpp.server --model models/7B/ggml-model.bin
 ```
 
 Navigate to [http://localhost:8000/docs](http://localhost:8000/docs) to see the OpenAPI documentation.
````
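Once the renamed `falcon_cpp.server` module is running, the hunk's context notes it can be used with any OpenAI-compatible client. A minimal sketch of talking to it from Python, assuming the server listens on the default `http://localhost:8000` and exposes an OpenAI-style `/v1/completions` route; the helper names below (`completion_request`, `complete`) are hypothetical, not part of falcon_cpp:

```python
import json
import urllib.request

# Assumption: default bind address used by the server command above.
BASE_URL = "http://localhost:8000"

def completion_request(prompt, max_tokens=32):
    """Build the JSON body for an OpenAI-style /v1/completions call."""
    return json.dumps({"prompt": prompt, "max_tokens": max_tokens}).encode()

def complete(prompt):
    """POST the prompt to the (assumed) completions endpoint and parse the reply."""
    req = urllib.request.Request(
        BASE_URL + "/v1/completions",
        data=completion_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

With the server up, `complete("Hello")["choices"][0]["text"]` would return the model's continuation; the exact response schema is whatever the OpenAI-compatible layer emits, so consult the generated docs at `/docs`.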

0 commit comments