vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
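For context, a minimal sketch of what offline batch inference with the engine looks like; the prompts and the facebook/opt-125m checkpoint are placeholder examples, not taken from this page:

```python
# Minimal sketch of offline batch inference with vLLM's Python API.
# The model name below is only an example; substitute any supported checkpoint.
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The capital of France is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load the model and run batched generation across all prompts.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(f"Prompt: {output.prompt!r}")
    print(f"Generated: {output.outputs[0].text!r}")
```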
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
2 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
  - [Kernel] (2/N) Machete - Integrate into CompressedTensorsWNA16 and GPTQMarlin
  - [Bug]: The MixtralForCausalLM architecture and the mistralai/Mixtral-8x7B-Instruct-v0.1 model are stated to be supported by vLLM, but an error occurs during model loading.
  - [Bug]: Unable to use fp8 kv cache with chunked prefill on ampere
  - [Model] 1.58bits BitNet Model Support
  - [New Model]: MiniCPM-V-2_6-int4
  - [Multi-step] Remove redundant CPU to GPU transfer for non-last rank PP/TP
  - [Bug]: Using CPU for inference, an error occurred. [Engine iteration timed out. This should never happen!]
  - [RFC]: Enable Memory Tiering for vLLM
  - Virtual Office Hours: August 8 and August 21
  - [Bug]: when using llama-3.1-70b-instruct for inference, input with large number of tokens(>8k) will result in endless output
- Docs
  - Python not yet supported