vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
3 Subscribers
Add a CodeTriage badge to vllm
Help out
Issues
- [Not for review] Accelerated DAG p2p
- [Feature]: Add support for interchangeable radix attention
- Create CI jobs for Power (ppc64le)
- [Feature]: Add LoRA support for BloomForCausalLM
- [Hardware][Nvidia][Core][Feature] New feature: VMM (virtual memory management) KV cache for NVIDIA GPUs
- GPU utilization drops as the number of concurrent requests increases
- [Feature]: Add readiness endpoint /ready and return /health earlier (vLLM on Kubernetes); see the health-check sketch after this list
- [Bug]: Request for a stack trace when "Watchdog caught collective operation timeout" occurs
- [Usage]: Is there a way to turn off fast attention? A parameter, maybe? My model deployment takes 30 minutes to complete
- [Bug]: Garbled tokens appear in vLLM generation results every time the model is switched to a new LLM (Qwen)
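
The readiness-endpoint request above asks for a dedicated /ready route alongside the /health route that vLLM's OpenAI-compatible server already exposes. As a point of reference, here is a minimal client-side sketch that polls /health until the engine responds; the base URL, port, and timeout values are assumptions for illustration, not part of the request.

```python
# Minimal readiness poll for a vLLM OpenAI-compatible server.
# Assumption: the server runs at the default address http://localhost:8000;
# adjust BASE_URL to match your deployment.
import time
import urllib.error
import urllib.request

BASE_URL = "http://localhost:8000"  # assumed default vLLM server address


def is_healthy(timeout: float = 2.0) -> bool:
    """Return True once GET /health answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{BASE_URL}/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: the engine is not up yet.
        return False


def wait_until_ready(poll_interval: float = 5.0, max_wait: float = 1800.0) -> bool:
    """Poll /health until the server is reachable or max_wait seconds elapse."""
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        if is_healthy():
            return True
        time.sleep(poll_interval)
    return False


if __name__ == "__main__":
    print("ready" if wait_until_ready() else "timed out waiting for /health")
```

A Kubernetes deployment would typically express the same check declaratively as an httpGet readiness probe against /health rather than running a script like this; the issue asks for a separate /ready route so that liveness and readiness can be distinguished.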
Docs
- Python not yet supported