text-generation-inference
https://github.com/huggingface/text-generation-inference
Python
Large Language Model Text Generation Inference
Open issues
- [WIP] Add gfx1100 support to AMD pytorch build
- [New Model Request] NVLM
- input tokens exceeded `max_input_tokens`
- TGI drops requests when 150 requests are sent continuously at 5 requests per second on 8x AMD MI300X with Llama 3.1 405B
- Excessive use of VRAM for Llama 3.1 8B
- [DOCS] Add Google Cloud TGI integration via dedicated DLCs
- huggingface_hub.errors.GenerationError: Request failed during generation: Server error:
- Server error: transport error
- Remove max_stop_sequences by default
- How to turn on the KV cache when serving a model? (see the client sketch after this list)
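
Several of the issues above concern prompt limits and generation settings (`max_input_tokens`, KV cache, stop sequences). For context, below is a minimal sketch of querying a running TGI server from Python with `huggingface_hub.InferenceClient`; the local URL, prompt, and parameter values are illustrative assumptions, not taken from this page.

```python
# A minimal sketch, assuming a TGI server is already running locally on port 8080
# (for example, launched via the ghcr.io/huggingface/text-generation-inference Docker image).
from huggingface_hub import InferenceClient

# Point the client at the assumed local TGI endpoint.
client = InferenceClient("http://localhost:8080")

# `max_new_tokens` bounds the completion length; the server separately enforces
# `max_input_tokens` on the prompt, which is what the "input tokens exceeded" issue refers to.
output = client.text_generation(
    "Explain KV caching in one sentence.",
    max_new_tokens=64,
    temperature=0.7,
)
print(output)
```

The generation parameters here are only examples; server-side limits such as `max_input_tokens` are set when the TGI server is launched, not per request.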