vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
4 Subscribers
Help out
- Issues
- [Bug]: An extremely weird bug occurs when `evaluate` is imported before vllm
- [Bug]: "gettid" was not declared error when build from source for cpu with version after v0.6.1
- [Model] Support GGUF models newly added in `transformers` 4.46.0
- [Misc] Add a `max_batch_size_to_capture` flag to reduce vLLM's start-up time
- [Bugfix] Generate multiple different prompts in `benchmark_prefix_caching.py` based on `--num-prompts`
- [Usage]: How do I use LangChain for tool calls?
- [Bug]: Input lengths greater than 32K with nvidia/Llama-3.1-Nemotron-70B-Instruct-HF generate garbage output on v0.6.3 (issue not seen in v0.6.2)
- [Bug]: "Using Tesla V100 to load the GPTQ-Int4 model results in all output being exclamation marks."
- [Bug]: Incompatible shape in block table when running Phi-3.5-mini-instruct
- [Bug]: Fused MoE pytest fails with a large number of experts
- Docs
- Python not yet supported