vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
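For context, vLLM exposes a simple offline inference API in addition to its serving mode; a minimal sketch is below (the model name and sampling settings are illustrative placeholders, not recommendations):

```python
# Minimal offline-inference sketch with vLLM; model and sampling values are illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")            # builds the engine for a small example model
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["What is vLLM?"], params)
for out in outputs:
    print(out.outputs[0].text)                  # generated completion for each prompt
```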
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really a pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
3 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
  - [Usage]: low GPU usage in qwen1.5 110b int4 inference
  - [Feature]: Set tensor_parallel_size to -1, to use all available cuda devices
  - [Bug]: RuntimeError: "cat_cuda" not implemented for 'Float8_e4m3fn'
  - [Bug]: Load LoRA adaptor for Llama3 seems not working
  - add benchmark test for fixed input and output length
  - [Usage]: ValueError: User-specified max_model_len (8192) is greater than the derived max_model_len (sliding_window=4096 or model_max_length=None in model's config.json).
  - No executable after building vllm from source with CPU support
  - [Performance]: empirical measurement of object serialization for input/output of worker
  - [Usage]: Gemma2-9b not working on A10G 24gb gpu
  - [Bug]: Qwen2 Moe FP8 not supported on L40
- Docs
  - Python not yet supported