[Feature Request] Does sllm support multi-node inference? #133

Open
@Timenumber

Description


Prerequisites

  • I have searched existing issues and reviewed documentation.

Problem Description

Hello, I set up an environment with two nodes, each with 1 GPU, following the Multi-Machine Setup Guide. The environment seems to initialize correctly, but when I try to deploy the model with the command:

```
sllm-cli deploy --model meta-llama/Llama-3.2-1B --num_gpus=2
```

I receive the following error:

```
Error: No available node types can fulfill resource request {'CPU': 1.0, 'worker_id_1': 0.1, 'worker_node': 0.1, 'GPU': 2.0}. Add suitable node types to this cluster to resolve this issue.
```

It seems that a resource request spanning multiple nodes cannot be fulfilled. Is this a configuration issue on my side, or does sllm currently not support multi-node inference? Thank you for your assistance!
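For context, the error message looks like Ray's scheduler rejecting a single bundled resource request: a request dict such as `{'GPU': 2.0}` must be satisfied by one node, so two 1-GPU nodes can never fulfill it. Spanning nodes would require splitting the allocation into per-GPU bundles, e.g. via a Ray placement group. Below is a minimal sketch of that distinction using plain Ray only (not sllm internals; the `Shard` actor is a hypothetical stand-in for a model-parallel worker):

```python
import socket

import ray
from ray.util.placement_group import placement_group
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy

ray.init(address="auto")  # attach to the existing two-node cluster

# A single bundle asking for 2 GPUs (what the error shows) must fit on
# ONE node, so it can never be scheduled on two 1-GPU nodes:
#
#   @ray.remote(num_gpus=2)      # pends forever on this cluster
#   def needs_two_gpus(): ...

# Two 1-GPU bundles, by contrast, are allowed to land on different nodes.
# "PACK" prefers a single node but spills across nodes when necessary;
# "SPREAD" places bundles on distinct nodes where possible.
pg = placement_group([{"GPU": 1, "CPU": 1}] * 2, strategy="PACK")
ray.get(pg.ready())  # blocks until both bundles are reserved


@ray.remote(num_gpus=1)
class Shard:
    """Hypothetical stand-in for one model-parallel worker."""

    def node(self) -> str:
        return socket.gethostname()


shards = [
    Shard.options(
        scheduling_strategy=PlacementGroupSchedulingStrategy(
            placement_group=pg,
            placement_group_bundle_index=i,
        )
    ).remote()
    for i in range(2)
]
print(ray.get([s.node.remote() for s in shards]))  # two hostnames => spans nodes
```

Whether sllm can issue per-GPU bundles like this, rather than a single `num_gpus=2` bundle, is essentially what I'm asking.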

Proposed Solution

Support multi-node inference, i.e. allow a deployment's GPU request to span multiple nodes.

Alternatives Considered

No response

Additional Context

No response

Importance

Nice to have

Usage Statistics (Optional)

No response

Labels

Question (Further information is requested)
