vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
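
For context, vLLM exposes an offline batched-inference API in Python. The sketch below follows the project's documented quickstart pattern; the model name is only an illustrative example:

```python
from vllm import LLM, SamplingParams

# Example prompts; the model name below is just an illustration.
prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Load the model and run batched generation.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, output.outputs[0].text)
```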
- Issues
- [Bug]: Error Running DeepSeek-v2-Lite w/ FP8
- [Performance]: use Python array to replace Python list for zero-copy tensor creation
- [Bug]: Distributed inference for VL model crashed (so slow that the connection closed)
- [Bug]: vLLM API server does not receive supported parameter `truncate_prompt_tokens`
- [Core] generate from input embeds
- [Bug]: Can't load BNB model
- [DO NOT MERGE] gRPC OpenAI server prototypes
- [Build] Dockerfile revert to CUDA 12.1
- [Bug]: For Mistral models, after the optional system message, conversation roles must alternate user/assistant
- Add required libcuda.so