vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
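For context on the engine described above, here is a minimal offline-inference sketch using vLLM's public Python API; the model name is an arbitrary small example for illustration, not something this page prescribes.

```python
# Minimal vLLM offline-inference sketch (illustrative; model choice is arbitrary).
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

# LLM() loads the model weights and sets up the paged KV cache.
llm = LLM(model="facebook/opt-125m")

# generate() batches the prompts and returns one RequestOutput per prompt.
for output in llm.generate(prompts, sampling_params):
    print(output.prompt, output.outputs[0].text)
```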
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported · 4 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bug]: new beam search implementation ignores stop conditions
- [Bug]: Streaming response fails after one token (0.5.3.post1)
- [Bug]: AsyncLLMEngine stuck on a single overly long request
- [Feature]: Improve Logging For Embedding Models
- [RFC]: Make device agnostic for diverse hardware support
- [Usage]: Manually Increasing inference time
- [Bugfix] fix error due to an uninitialized tokenizer when using `skip_tokenizer_init` with `num_scheduler_steps`
- [Bug]: Qwen2.5-72B-Instruct stress testing triggers "AsyncLLMEngine has failed, terminating server process"
- Questions about the inference performance of the GPTQ model
- [Bug]: 1.0.0.dev placeholder doesn't work with `uv pip install`
- Docs: Python not yet supported