vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
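For triagers new to the project, here is a minimal sketch of vLLM's offline inference API (the part of the engine most issues below touch). The model name `facebook/opt-125m` and the sampling settings are placeholder assumptions for illustration, not anything this page prescribes.

```python
from vllm import LLM, SamplingParams

# Placeholder model; substitute any Hugging Face model that vLLM supports.
llm = LLM(model="facebook/opt-125m")

# Example sampling settings; tune for your own use case.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Batched generation: vLLM schedules all prompts together for high throughput.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```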
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
4 Subscribers
Help out
- Issues
- [Core]: (1/N) Support prefill only models by Workflow Defined Engine - Prefill only scheduler
- [Performance]: Phi-3.5 vision model consumes excessive CPU RAM and the process gets killed
- [Bug]: vLLM OpenAI-api server `/docs` endpoint fails to load
- [Bug]: Extremely low throughput with pipeline parallelism when the batch size (running requests) is small
- [Bug]: Error Running Qwen2.5-7B-Instruct on CPU
- [Installation]: pip install vllm-0.6.2.zip fails with "setuptools-scm was unable to detect version for /tmp/pip-req-build-7ptioibj"
- [Bug]: Qwen2.5 function calling: ChatLanguageModel works, but with StreamingChatLanguageModel the logger reports an error
- [Bug]: Qwen2.5-Math-7B-Instruct produces garbled output with vLLM but not with Hugging Face
- [Misc]: CMake Clean-up / Refactor Tasks
- [Bug]: Unable to use --enable-lora on latest vllm docker container (v0.6.2)
- Docs
- Python not yet supported