WangErXiao/README.md

👋 Hi, I'm Robin!
💻 A programmer
🌱 Currently learning large model technology

💡 "Simplicity is the soul of efficiency." – Austin Freeman


## Tech Stack

Java · Python · Kubernetes · Docker · Spring · MySQL · Redis · Kafka · MyBatis · GPT · LLM · Hugging Face Transformers · vLLM

## GitHub Statistics

## Contact Me

  • WeChat ID: RoYaoRoYao

## Pinned

  1. **vllm-project/vllm** (Public)

     A high-throughput and memory-efficient inference and serving engine for LLMs

     Python · ★ 66.2k · Forks 12.2k

  2. **vllm-project/llm-compressor** (Public)

     Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

     Python · ★ 2.5k · Forks 337
