text-generation-inference
https://github.com/huggingface/text-generation-inference
Language: Python
Large Language Model Text Generation Inference
Help out with these open issues:
- TGI Server should be installable via pip
- Complex response format leads the container to run forever on CPU
- PREFIX_CACHING=0 does not disable prefix caching in v2.3.1
- (Prefill) KV cache indexing error when multiple TGI servers are started concurrently
- Prefix caching causes two different responses to the same HTTP call with a fixed seed, depending on which machine makes the call (see the first sketch after this list)
- TGI does not support FP8 quantized models on ROCm
- Get the OpenTelemetry trace ID from request headers instead of creating a new trace
- How do you download a subfile? (see the second sketch after this list)
- OpenAI Client format + chat template for a single call (see the third sketch after this list)
- Add AMD gfx110* support
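First sketch: the prefix-caching issue above rests on the expectation that a fixed seed makes sampled output reproducible across identical calls. This is a minimal sketch of that expectation, assuming a TGI server reachable at http://localhost:8080 (the URL and prompt are illustrative), using the `huggingface_hub` client:

```python
from huggingface_hub import InferenceClient

# Assumed local TGI endpoint; adjust to your deployment.
client = InferenceClient("http://localhost:8080")

prompt = "Write one sentence about caching."
outputs = [
    client.text_generation(
        prompt,
        max_new_tokens=50,
        do_sample=True,
        seed=42,  # a fixed seed should make sampling reproducible
    )
    for _ in range(2)
]

# The issue reports that responses can still differ across machines
# when prefix caching is enabled.
assert outputs[0] == outputs[1], "same seed, different completions"
```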
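Second sketch: assuming "subfile" means a single file inside a model repository, `huggingface_hub.hf_hub_download` fetches one file rather than the whole repo; the repo id and filename here are illustrative:

```python
from huggingface_hub import hf_hub_download

# Download only config.json instead of the entire repository.
path = hf_hub_download(repo_id="gpt2", filename="config.json")
print(path)  # local cache path of the single downloaded file
```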
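Third sketch: TGI exposes an OpenAI-compatible Messages API, so a single call through the `openai` client applies the model's chat template server-side. A minimal sketch, assuming a server at http://localhost:8080 (the base URL and placeholder model name are illustrative):

```python
from openai import OpenAI

# TGI's /v1/chat/completions endpoint accepts the OpenAI request format;
# the api_key is unused by a local server but required by the client.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="-")

response = client.chat.completions.create(
    model="tgi",  # placeholder; the server runs whatever model it was launched with
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is text-generation-inference?"},
    ],
    max_tokens=100,
)
print(response.choices[0].message.content)
```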