vllm
https://github.com/vllm-project/vllm
Python
A high-throughput and memory-efficient inference and serving engine for LLMs
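For a sense of what the engine does, here is a minimal offline-inference sketch following the project's documented quickstart; the model name is only an illustrative choice.

```python
# Minimal vLLM offline-inference sketch (per the project's quickstart);
# the model name below is an example, not a recommendation.
from vllm import LLM, SamplingParams

prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="facebook/opt-125m")  # any Hugging Face model vLLM supports
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```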
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
3 Subscribers
Add a CodeTriage badge to vllm
Help out
- Issues
- Add tokenizer_init_kwargs into vllm engine
- Inference with vLLM for Qwen-14B is inconsistent with the original Qwen results, and accuracy drops significantly
- Can the vLLM framework support Huawei's 910B chip in the future?
- Recovery from OOM
- SWA models do not yet support prefix caching
- Expert Parallelism with current FusedMoE kernel
- Does the V100 really support INT4 (GPTQ or AWQ)?
- FP8 KV cache doesn't work with prefix caching
- Continuous batching load test limited to 75 VUs with a single 3090
- CuPy import errors in Docker
- Docs
- Python not yet supported