vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
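To make the description concrete, here is a minimal sketch of vLLM's offline-inference Python API as shown in the project's quickstart; it assumes the `vllm` package is installed, and the model name is only an illustrative placeholder.

```python
# Minimal offline-inference sketch with vLLM
# (assumes `pip install vllm`; "facebook/opt-125m" is just an example model).
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The capital of France is",
]

# Sampling settings for generation.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load a model; vLLM handles KV-cache memory management and batching internally.
llm = LLM(model="facebook/opt-125m")

# Generate completions for all prompts in one batched call.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```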
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
4 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- [Bug]: Continuing a generation is not deterministic
- [Bug]: Missing detection of BFloat16 for CPU ARM
- [Feature]: allow disable CORSMiddleware in openai api_server
- [Bug]: Using "response_format": { "type": "json_object" } with /v1/chat/completions is terminating the engine
- [Bug]: When using the chat interface of the vLLM server, I frequently encounter issues with the server restarting.
- [Misc]: tqdm during beam_search
- [Bug]: 'vllm' object has no attribute 'unified_attention' error - ppc64le docker image
- [Misc]: Wondering why we checkout from a specific commit of Triton
- [Usage]: Does vLLM support co-hosting multiple models on a single server?
- [Bug]: “limit-mm-per-prompt” can't be set correctly as in 0.6.5
- Docs
- Python not yet supported