vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
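For orientation, below is a minimal sketch of offline inference with vLLM's Python API (`LLM`, `SamplingParams`, `generate`); the model name facebook/opt-125m and the sampling settings are illustrative assumptions, not anything stated on this page.

```python
# Minimal offline-inference sketch with vLLM.
# Model name and sampling values are placeholder assumptions.
from vllm import LLM, SamplingParams

prompts = ["Hello, my name is", "The future of AI is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="facebook/opt-125m")  # any supported Hugging Face causal LM
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```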
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
3 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
  - [Performance]: Flashinfer backend's improvement is marginal compared to FlashAttention backend for long context Qwen2-72b-instruct-128k
  - [Feature]: support reward model API
  - [Installation]: ERROR: No matching distribution found for torch==2.3.1
  - [WIP] Fp8 marlin grouped
  - [Feature]: 4D Attention Mask
  - [Performance]: Multi-node Pipeline Parallel double bandwidth, no change in performance
  - [Feature]: LLM2Vec (Fine-Tuned Embeddings) Support
  - [Bug]: Intel GPU Test failing in CI
  - [Feature]: Implementation of Sliding Window Attention for Full Context Support with Gemma-2
  - [Bugfix][Core] Output sampling: heuristic to choose between candidates
- Docs
  - Python not yet supported