vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
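As quick context for the issue list below, here is a minimal offline-inference sketch using vLLM's documented `LLM`/`SamplingParams` API; `facebook/opt-125m` is the small example model from the project's quickstart and stands in for any supported model.

```python
from vllm import LLM, SamplingParams

# Load a model and generate greedily-sampled continuations for a prompt.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

for out in llm.generate(["The capital of France is"], params):
    print(out.outputs[0].text)
```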
Help out
- Issues
- [Bug]: vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly.
- [Bug]: beam search never ends
- [Usage]: Running a GGUF model requires a chat template; how should it be written? (See the sketch after this issue list.)
- [Bug]: With LoRA enabled, greedy search returns different answers.
- [Bug]: segfault when loading MoE models
- [New Model]: Please support the google/madlad400-3b-mt translation model in vLLM.
- [Bug]: An abnormal delay of 300 milliseconds was detected.
- [Bug]: vllm api_server often crashes on versions above 0.5.3.
- [Bug]: In _schedule_running(...), the budget's sequence count is not updated.
- [Feature]: Logit soft-capping for Gemma 2 models in the TPU Pallas attention backend.
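For the GGUF template question above, a hedged sketch: it assumes a local GGUF file and borrows the chat template from the base model's tokenizer. Both paths are illustrative, not taken from the issue, and vLLM's GGUF support is experimental, so details may vary by version.

```python
from vllm import LLM, SamplingParams

# Illustrative paths: a GGUF file does not ship a Hugging Face tokenizer,
# so the tokenizer (and its chat template) comes from the base model repo.
llm = LLM(
    model="./models/llama-2-7b-chat.Q4_K_M.gguf",  # assumed local GGUF file
    tokenizer="meta-llama/Llama-2-7b-chat-hf",     # assumed base model repo
)

# apply_chat_template renders a message list into the single prompt string
# the model was trained to expect.
prompt = llm.get_tokenizer().apply_chat_template(
    [{"role": "user", "content": "How do I write a chat template?"}],
    tokenize=False,
    add_generation_prompt=True,
)

out = llm.generate([prompt], SamplingParams(max_tokens=64))
print(out[0].outputs[0].text)
```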
- Docs: Python not yet supported