vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
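vLLM's description above refers to its batched inference API (it also ships an OpenAI-compatible server). As a minimal hedged sketch of the offline path, using placeholder model and prompt values not taken from this page:

```python
# Minimal offline-inference sketch with vLLM's LLM / SamplingParams API.
# The model name and prompt are illustrative placeholders.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")                    # load a small model into the engine
params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["Hello, my name is"], params)   # batched generation over the prompt list
for out in outputs:
    print(out.outputs[0].text)                          # generated continuation for each prompt
```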
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
2 Subscribers
Help out
- Issues
- [Bug]: llama3-405b-fp8 NCCL communication
- [Usage]: How do I configure Phi-3-vision for high throughput?
- [Usage]: alignment between trl and llm.generate
- [Usage]: How to use FP8 or other quantization algorithms for Minicpmv2_6
- [Bug]: vLLM server not supporting stabilityai/stablelm-3b-4e1t model on CPU
- [Usage]: Is there an option to obtain attention matrices during inference, similar to the output_attentions=True parameter in the transformers package?
- [Usage]: About bitsandbytes
- [Usage]: how to abort request?
- [Kernel] (2/N) Machete - Integrate into CompressedTensorsWNA16 and GPTQMarlin
- [Bug]: The MixtralForCausalLM architecture and the mistralai/Mixtral-8x7B-Instruct-v0.1 model are stated to be supported by vLLM, but an error occurs during model loading.
- Docs
- Python not yet supported