text-generation-inference
https://github.com/huggingface/text-generation-inference
Python
Large Language Model Text Generation Inference
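The project serves large language models behind an HTTP API. As a minimal sketch of how a client talks to it, the snippet below posts a prompt to the `/generate` endpoint of an already-running TGI server; the local URL and the sampling parameters are placeholder assumptions, not values from this page (adjust host, port, and parameters to your deployment):

```python
# Minimal sketch: query a running text-generation-inference server.
# Assumes a TGI instance was started separately (e.g. via the official
# Docker image) and is listening at http://localhost:8080 -- a
# hypothetical local endpoint chosen for illustration.
import requests

response = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is text-generation-inference?",
        "parameters": {"max_new_tokens": 64, "temperature": 0.7},
    },
    timeout=60,
)
response.raise_for_status()
# The server returns a JSON body containing the generated continuation.
print(response.json()["generated_text"])
```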
Issues
- Running TGI on NVIDIA T4
- OpenGVLab/InternVL2-8B model support
- Generation kwargs assignment when processing a request
- [Volta] [No flash attention] Dependencies missing for running quantized Llama models in docker
- Allow multi-lora in Messages API
- Improve vlm support (add idefics3 support)
- [Volta] [No flash attention] Llama 3.1 8B Instruct failed to start - "'<' not supported between instances of 'NoneType' and 'int'"
- Resolve lora loading bug
- Running FP8 and INT4 on multiple AMDs fails with `torch.cuda.OutOfMemoryError`
- PaliGemma detection task is failing