Conversation

@Sundarasan commented Aug 27, 2023

Related issue: #15 (comment)

Changes:

  • Basic Helm Setup

To install:

  • From the repo's root directory, run the following command:
    helm install llamagpt ./deploy/helm/. --set ui.service.type=LoadBalancer
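As an alternative to passing `--set` flags, the same values can be kept in an override file (a sketch; the file name `values-override.yaml` is an example, and the `ui.service.type` key assumes the structure implied by the install command above):

```yaml
# values-override.yaml — hypothetical override file for this chart
ui:
  service:
    # Expose the UI via a cloud LoadBalancer instead of the chart's default
    type: LoadBalancer
```

It would then be installed with `helm install llamagpt ./deploy/helm/. -f values-override.yaml`, which keeps the configuration in version control rather than on the command line.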

Not addressed in this PR:

@Sundarasan Sundarasan marked this pull request as ready for review August 27, 2023 10:49
@Sundarasan (Author)

@mayankchhabra Could you please review this PR?

    repository: ghcr.io/getumbrel/llama-gpt-api
    tag: 1.0.1
    pullPolicy: IfNotPresent
    affinity: {}

FYI, the chart ships this empty affinity: {} default, but I don't see it referenced in either of the deployment template specs. It would be great to get that wired up, and perhaps tolerations too, so that people can choose to schedule the api component on higher-performance nodes if they want to. That's my use case, anyway.
Thanks for adding this, BTW.
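One way to wire those values into the api deployment could look like the sketch below, using Helm's standard `with`/`toYaml`/`nindent` idiom (the template file name and the `api.affinity`/`api.tolerations` value paths are assumptions about this chart's layout, not confirmed from the source):

```yaml
# deploy/helm/templates/api-deployment.yaml (excerpt, hypothetical file name)
spec:
  template:
    spec:
      # Render affinity only when the user sets a non-empty value
      {{- with .Values.api.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      # Same pattern for tolerations, e.g. to target tainted GPU nodes
      {{- with .Values.api.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```

With that in place, a user could set `api.affinity` to a `nodeAffinity` rule (and matching `tolerations`) in their values file to pin the api pods to higher-performance nodes.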
