Use new vllm/llama-cpp backends for evaluate #1539
mergify[bot] merged 2 commits into instructlab:main
Conversation
nathan-weinberg
left a comment
I would like @alimaredia and @cdoern to sign off, but apart from one logging comment, LGTM.
Force-pushed from 0167d22 to d9306a2
Will need to adjust to changes in #1531
Looks like pylint is catching a dep issue outside this code:
cdoern
left a comment
I think this is the right structure in terms of config. I might do a follow-up to make the option overriding work the way it does in the other cmds.
The way I set it up for train, and @alimaredia did for serve, is that in configuration.py the default_map on the click context is populated with key->value pairs that correspond to the flag names of the specific cmd (see the sketch below).
This is a clean solution too, though, and gets everything working, so it gets my +1!
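For context, a minimal sketch of how click's default_map turns values loaded by configuration.py into per-command flag defaults. The command, flag, and config values below are invented for illustration; this is not the actual instructlab code.

```python
# Minimal sketch, assuming invented config values; not the actual instructlab code.
import click

@click.group()
@click.pass_context
def cli(ctx):
    # Pretend these values were loaded from config.yaml by configuration.py.
    # Outer keys are command names; inner keys must match the flag names.
    ctx.default_map = {
        "evaluate": {"backend": "llama_cpp"},
    }

@cli.command()
@click.option("--backend", type=click.Choice(["vllm", "llama_cpp"]))
def evaluate(backend):
    # --backend falls back to the default_map value unless passed on the CLI.
    click.echo(f"evaluating with backend={backend}")

if __name__ == "__main__":
    cli()
```

Passing the flag explicitly (e.g. `--backend vllm`) still overrides the config-provided default.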
leseb
left a comment
LGTM but I think it's worth a note in the CHANGELOG.md. Thanks!
Signed-off-by: Dan McPherson <dmcphers@redhat.com>
macOS has been flaky recently, rerunning
Follow-up to #1369
This PR uses the new serving backends and supports both the vllm and llama_cpp paths. It replaces the temporary code that served vllm directly.
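As a rough illustration of the idea, evaluate can dispatch to whichever serving path the config selects instead of spawning vllm directly. The class and function names below are hypothetical and do not reflect the actual instructlab backend API.

```python
# Hypothetical sketch: invented names, not the real instructlab backend API.
class LlamaCppBackend:
    def __init__(self, model_path: str):
        self.model_path = model_path

    def serve(self) -> str:
        return f"serving {self.model_path} with llama_cpp"

class VLLMBackend:
    def __init__(self, model_path: str):
        self.model_path = model_path

    def serve(self) -> str:
        return f"serving {self.model_path} with vllm"

def select_backend(backend: str, model_path: str):
    # Pick the serving path from config instead of hard-coding vllm.
    backends = {"llama_cpp": LlamaCppBackend, "vllm": VLLMBackend}
    try:
        return backends[backend](model_path)
    except KeyError:
        raise ValueError(f"unknown backend: {backend}") from None

print(select_backend("vllm", "models/example.gguf").serve())  # example usage
```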
Checklist:
- Commit messages follow conventional commits.