vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
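For anyone new to the project, here is a minimal offline-inference sketch using vLLM's Python API; the prompt and model name are arbitrary illustrative choices, not taken from this page:

```python
# Minimal sketch of offline inference with vLLM's Python API.
from vllm import LLM, SamplingParams

prompts = ["What is PagedAttention?"]
sampling_params = SamplingParams(temperature=0.0, max_tokens=64)  # greedy decoding

llm = LLM(model="facebook/opt-125m")  # small example model for illustration
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```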
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
4 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bug]: When enabling LoRA, greedy search returns different answers.
- [Bug]: vllm api_server often crashes when the version is higher than 0.5.3.
- [Feature]: Slurm run_cluster.sh launcher instead of just Ray
- [WIP][Spec Decode] Add multi-proposer support for variable and flexible speculative decoding
- [Bug]: Is vllm compatible with torchrun?
- [Bug]: RuntimeError: operator torchvision::nms does not exist
- [Usage]: How do I pass in the JSON content-type for ASYNC Mistral 7B offline inference
- [Usage]: Confirm tool calling is not supported and this is the closest thing that can be done
- [Bug]: Requests larger than 75k input tokens cause `Input prompt (512 tokens) is too long and exceeds the capacity of block_manager` error
- [Bug]: Intermittent model load failure with error `Got async event : local catastrophic error` on A100
- Docs
- Python not yet supported