vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
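For context on what the project does, here is a minimal offline-inference sketch using vLLM's Python API. This is only an illustrative example: the model name is arbitrary and the snippet assumes a recent vLLM release installed locally.

```python
from vllm import LLM, SamplingParams

# Load a model into the engine (model name is just an example).
llm = LLM(model="facebook/opt-125m")

# Sampling settings for generation.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Run batched generation and print the completions.
outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```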
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
4 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bug]: vLLM with ray backend and enable nsight can't get perf metrics due to connection issue
- [Misc] An example to compute the low-noise perplexity estimate for the Llama-2 model family.
- [Bug]: FP8 Marlin fallback out of memory regression
- [Bug]: Llama 3 answers starting with <|start_header_id|>assistant<|end_header_id|>
- [Bug]: Phi-3-small-128k-instruct on 1 A100 GPU - AssertionError: does not support prefix-enabled attention.
- Request support for the deepseek-gptq version
- [Bug]: gpu-memory-utilization does not pick up enough GPU memory
- [Bug]: Build error occurs when installing vLLM
- [Bug]: llama3-405b-fp8 NCCL communication
- [Usage]: Is there an option to obtain attention matrices during inference, similar to the output_attentions=True parameter in the transformers package?
- Docs
- Python not yet supported