vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
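For orientation, a minimal sketch of vLLM's offline inference API follows; the model name and sampling values are illustrative assumptions, and exact parameters may vary between versions.

```python
from vllm import LLM, SamplingParams

# Load a model and define sampling behavior (model name is an arbitrary example).
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts in a single call.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```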
Help out
- Issues
- [Feature]: Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference
- [Misc]: Test Plan for PyTorch Nightly and other dependent libraries
- [Feature]: Long context window - Ring Attention with Blockwise Transformers for Near-Infinite Context
- [Feature]: Long context window - LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens
- [Usage]: Where can I check the conditions for OPT model to stop generating new output token?
- [Performance]: Poor performance of vllm on AWQ
- [Usage]: Issues encountered regarding text quality and length when deploying AquilaChat2-34B using vllm
- [Usage]: Running fastchat.serve.vllm_worker in WSL Ubuntu 22.04 exits the system
- [Misc]: hidden states using vllm
- [CI] Use sccache in CI.
- Docs: Python not yet supported