vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
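For context on what the project does, here is a minimal offline-inference sketch using vLLM's Python API. The model name and sampling parameters are arbitrary examples, and exact arguments may vary between vLLM versions:

```python
from vllm import LLM, SamplingParams

# Example prompts and sampling settings (values chosen for illustration only).
prompts = ["Hello, my name is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load a small example model; any Hugging Face model supported by vLLM works here.
llm = LLM(model="facebook/opt-125m")

# Generate completions for the prompts and print the text of each result.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```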
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue in this project that needs help, along with instructions on how to triage it.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
3 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Usage]: deploy Llama3.1 405B-Instruct-FP8 with H800 * 8 not work
- [Kernel] Add Fused Layernorm + Dynamic-Per-Token Quant Kernels
- [BugFix][Speculative Decoding] Fixes the generation token numbers with sps
- [Bug]: 8-way tensor parallelism w/ Punica broken on Ubuntu 20.04 (effectively Azure) since v0.5
- [Bug]: multi-GPU inference (tensor_parallel_size=2) fails on Intel GPUs
- [Bug]: flash_attn # prefix-enabled attention case forward code maybe error?
- [Feature]: GLM4 function call is supported ?
- [Bug]: Cannot find any of ['adapter_name_or_path'] in the model's quantization config
- [Bug]: VLLM 0.5.3.post1 [rank0]: RuntimeError: NCCL error: unhandled cuda error (run with NCCL_DEBUG=INFO for details)
- [Usage]: use vllm==0.4.2 to infer qwen2-0.5b model on H800 1*80G,but GPU's computational power utilization is only around 20%
- Docs
- Python not yet supported