text-generation-inference
https://github.com/huggingface/text-generation-inference
Python
Large Language Model Text Generation Inference
- Issues
- IPEX: support FP8 KV cache
- Unable to load ibm-granite/granite-vision-3.2-2b (LlavaNextForConditionalGeneration config mismatch)
- Gaudi: Add Integration Test for Gaudi Backend
- Use ROCm 6.3.1
- Startup error when deploying TGI with AMD backend on versions `>3.1.0-rocm`
- When using --quantize fp8, the model hangs and does not respond at all, aborting with "ERROR: Arch conditional MMA instruction used without targeting appropriate compute capability. Aborting." (see the client sketch after this list)
- Support for Mistral Small 3.1
- WIP: Add VLM transformers backend
- gemma-3-27b-it runs out of memory during warmup
- Support for priority-based queueing in the backend queue
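
Several of the issues above concern serving configuration (FP8 KV cache, --quantize fp8, ROCm images). For context, here is a minimal sketch of querying a running TGI server with the huggingface_hub client. The endpoint URL, prompt, and generation parameters are assumptions, and the server is presumed to have been launched separately, for example via the official Docker image.

```python
# Minimal sketch, not an official recipe: query a TGI server that is
# assumed to already be running on localhost:8080 (e.g. started from the
# official Docker image, optionally with a flag such as --quantize fp8).
from huggingface_hub import InferenceClient

# The base URL is an assumption; point it at your own deployment.
client = InferenceClient("http://localhost:8080")

# One-shot text generation against whatever model the server has loaded.
output = client.text_generation(
    "Explain FP8 quantization in one sentence.",
    max_new_tokens=64,
)
print(output)
```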