vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
4 Subscribers
Help out
- Issues
- [Usage]: After upgrading vLLM from 0.5 to 0.6, the FastChat vllm_worker fails to load model weights when deployed via the openai_server method. The issue may be related to vLLM 0.6 requiring pydantic > 2.9. Is this incompatibility caused by FastChat 0.2.36 not having been updated recently, leaving it unable to adapt to vLLM 0.6?
- [Installation]: No module named 'vllm._version' (raised by: from vllm.version import __version__ as VLLM_VERSION)
- [Performance]: Maximizing the performance of batch inference of large models on vLLM 0.6.3
- [Installation]: Installation instructions for ROCm can be mainlined
- [Installation]: Failed building wheel for aiohttp; ERROR: Could not build wheels for aiohttp, which is required to install pyproject.toml-based projects
- [Usage]: Which branch should I use to test speculative decoding
- [Feature]: Add ability to sample a specific prompt log probability
- [Feature]: Option For Automatic Function Calling For CohereForAI/c4ai-command-r-plus-08-2024
- [Bug]: vLLM v0.6.2/v0.6.3 readily generates random output when the prompt contains many symbols (not words)
- [Performance]: Inference with Qwen2.5 on vLLM 0.6.3 feels slower (see the minimal batch-inference sketch after this list)
- Docs
- Python not yet supported
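
For context on the batch-inference performance and prompt log probability items above, the sketch below shows vLLM's offline LLM API with SamplingParams. The model name, prompts, and sampling values are illustrative placeholders rather than values taken from any of the listed issues; treat it as a minimal sketch, not a reproduction of a specific report.

```python
# Minimal offline batch inference with vLLM (placeholder model and prompts).
from vllm import LLM, SamplingParams

prompts = [
    "The capital of France is",
    "vLLM is a high-throughput inference engine that",
]

# prompt_logprobs requests log probabilities for prompt tokens;
# logprobs requests them for generated tokens.
sampling_params = SamplingParams(
    temperature=0.8,
    top_p=0.95,
    max_tokens=64,
    prompt_logprobs=1,
    logprobs=1,
)

# Placeholder model name; any model supported by vLLM works here.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")

# A single generate() call over the whole prompt list is the intended
# high-throughput path for offline batch inference.
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
    print("prompt logprobs:", output.prompt_logprobs)
```

Passing all prompts to one generate() call lets vLLM's continuous batching schedule the requests together, which is usually the first thing to check when batch throughput looks lower than expected.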