vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
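vLLM provides a Python API for offline batch inference in addition to its serving engine. Below is a minimal sketch of the offline path; the model name and sampling settings are illustrative assumptions, not taken from this page.

```python
# Minimal offline-inference sketch with vLLM.
# The model "facebook/opt-125m" is an illustrative assumption; substitute any model you have access to.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

llm = LLM(model="facebook/opt-125m")  # loads weights and allocates the paged KV cache
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, output.outputs[0].text)
```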
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, you can receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported
4 Subscribers
Help out
- Issues
  - Pytorch hete spec
  - [Bug]: initializing multiple LLM classes simultaneously on the same GPU get an error
  - [Bug]: ptxas /tmp/tmpxft_002385ca_00000000-11_attention_kernels.compute_50.ptx, line 4986061; error : Feature 'f16 arithemetic and compare instructions' requires .target sm_53 or higher
  - [Bug]: VLLM doesn't support LoRa with config `modules_to_save`
  - [Feature]: Simple Data Parallelism in vLLM
  - [Bug]: Port binding keep failing due to unnecessary code
  - [Bug]: AttributeError: '_OpNamespace' '_C_cache_ops' object has no attribute 'reshape_and_cache'
  - [Feature]: Enabling MSS for larger number of sequences (>256)
  - [Feature]: Support custom `max_mm_tokens`
  - [Bug]: quantization does not work with dummy weight format
- Docs
  - Python not yet supported