vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
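For context, here is a minimal offline-inference sketch using vLLM's Python API (`LLM` and `SamplingParams`); it assumes `vllm` is installed and a GPU is available, and the model name is only an illustrative choice.

```python
# Minimal offline-inference sketch with vLLM's Python API.
# Assumes `pip install vllm` and a CUDA-capable GPU; the model below is just an example.
from vllm import LLM, SamplingParams

prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")  # model weights are downloaded on first run
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each result carries the original prompt and its generated completion(s).
    print(output.prompt, output.outputs[0].text)
```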
3 Subscribers
Help out
- Issues
- [Feature]: Is GLM4 function calling supported?
- [Bug]: Cannot find any of ['adapter_name_or_path'] in the model's quantization config
- [Bug]: VLLM 0.5.3.post1 [rank0]: RuntimeError: NCCL error: unhandled cuda error (run with NCCL_DEBUG=INFO for details)
- [Usage]: Using vllm==0.4.2 to run inference with the qwen2-0.5b model on an H800 (1×80G), but GPU compute utilization is only around 20%
- [Installation]: ImportError: cannot import name 'LogicalTokenBlock' from 'vllm.block'
- [Bug]: vLLM backend with Triton server is not working
- [Bug]: RuntimeError: GET was unable to find an engine to execute this computation for llava-next model
- Update logits processor with tensor caching
- [Bug]: temperature=0 does not lead to greedy sampling
- [Feature]: vllm support for Ascend NPU
- Docs
- Python not yet supported