vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
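As a quick illustration of the description above, here is a minimal offline-inference sketch using vLLM's Python API; the model name and sampling settings are illustrative assumptions, not taken from this page:

```python
# Minimal offline-inference sketch with vLLM (model choice is illustrative).
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Loads the model and allocates the KV cache on the available GPUs.
llm = LLM(model="facebook/opt-125m")

# Batched generation; vLLM schedules requests for high throughput.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, output.outputs[0].text)
```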
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
3 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
  - [Feature]: Add Sliding Window support to FlashInfer backend?
  - [Feature]: Feature Request: Gateway for Model to Support Multiple Models Generation in a Given Context
  - [Feature]: Combine pipeline parallelism with speculative decoding
  - [Feature]: Reduce LoRA latency via speculative decoding
  - [Installation]: What is required for wheels to build?
  - [Bug]: Prefix Caching in BlockSpaceManagerV1 and BlockSpaceManagerV2 Increases Time to First Token (TTFT) and Slows Down System
  - [Feature]: Support rerank models
  - [Performance]: Performance degrades severely with long input
  - [Performance]: Mode/flag/option to maximize throughput while allowing large latency?
  - Removes duplicate outlines processors
- Docs
  - Python not yet supported