vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
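For context, a minimal offline-inference sketch using vLLM's Python API (the model checkpoint and prompts below are only illustrative choices):

```python
from vllm import LLM, SamplingParams

# Load a model into the engine; "facebook/opt-125m" is just an example checkpoint.
llm = LLM(model="facebook/opt-125m")

# Sampling settings for generation.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Batched generation; vLLM schedules the prompts for high-throughput inference.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], params)

for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```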
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
4 Subscribers
Help out
- Issues
- Adding new cutlass configurations for llama70B
- [Frontend] Add option for LLMEngine to return model hidden states.
- [Bug]: ray + vllm async engine: Background loop is stopped
- [Core] Adding Control Vector Support
- [misc] Optimize speculative decoding
- [plugin] move custom executor to plugins
- [Bug]: vllm0.5.5+Prefix caching RuntimeError: CUDA error: an illegal memory access was encountered
- [Bug]: Cannot use model with shorter context as draft model
- [Bug]: AttributeError: 'RayGPUExecutorAsync' object has no attribute 'forward_dag'
- [Feature]: need no_repeat_n_gram in SamplingParams
- Docs
- Python not yet supported