vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
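For orientation, here is a minimal sketch of what the engine does, using vLLM's offline batch-inference API (the model id is only an example; any Hugging Face model vLLM supports will do):

```python
from vllm import LLM, SamplingParams

# Prompts are submitted as a batch; vLLM schedules them internally with
# continuous batching and PagedAttention for high throughput.
prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")  # example model, swap in your own
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```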
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
3 Subscribers
Add a CodeTriage badge to vllm
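Assuming CodeTriage's standard badge URL pattern, the badge is a one-line Markdown snippet for the repo README:

```markdown
[![Open Source Helpers](https://www.codetriage.com/vllm-project/vllm/badges/users.svg)](https://www.codetriage.com/vllm-project/vllm)
```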
Help out
- Issues
- V100 support for int4 (GPTQ or AWQ): does it really work?
- FP8 KV cache doesn't work with prefix caching
- Continuous batching load test limited to 75 VUs (virtual users) with 1x RTX 3090
- CuPy import errors in Docker
- What is the meaning of [Avg generation throughput]?
- When will loading Qwen LoRA models be supported?
- Issues when running mistralai/Mixtral-8x7B-Instruct-v0.1 with vllm:v0.3.2
- Maximize GPU utilization for increased throughput (see the configuration sketch after this list)
- High CPU usage in Kubernetes with two T4s running the Zephyr 7B model
- Does vLLM plan to support multi-node connections with CuPy in CUDA graph mode?
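Several of the issues above revolve around engine configuration (quantized weights, FP8 KV cache, prefix caching, GPU memory headroom). As a rough sketch, these knobs are exposed as engine arguments in recent vLLM releases; flag names and value formats vary by version, so verify them against your installed vLLM before relying on this:

```python
from vllm import LLM

# Sketch of the configuration knobs discussed above; each keyword maps to a
# vLLM engine argument. Availability depends on the vLLM version and GPU.
llm = LLM(
    model="TheBloke/Mistral-7B-Instruct-v0.2-AWQ",  # example AWQ checkpoint
    quantization="awq",            # int4 AWQ weights (use "gptq" for GPTQ)
    kv_cache_dtype="fp8",          # FP8 KV cache to cut memory use; one issue
                                   # above reports this clashing with prefix caching
    enable_prefix_caching=True,    # reuse KV-cache blocks across shared prefixes
    gpu_memory_utilization=0.90,   # fraction of VRAM vLLM is allowed to claim
)
print(llm.generate(["Hello"])[0].outputs[0].text)
```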
- Docs
- Python not yet supported