text-generation-inference
https://github.com/huggingface/text-generation-inference
Python
Large Language Model Text Generation Inference
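For context on what the project does: TGI serves language models behind an HTTP API. Below is a minimal sketch of querying a locally running server, assuming one was started separately (e.g. via the official Docker image) and is listening on port 8080; the `/generate` route and its `inputs`/`parameters` payload follow TGI's documented API, but the prompt and parameter values here are illustrative only.

```python
# Minimal sketch: query a locally running text-generation-inference server.
# Assumes the server was launched separately and listens on localhost:8080.
import requests

response = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is deep learning?",
        "parameters": {
            "max_new_tokens": 64,   # cap on the number of generated tokens
            "temperature": 0.7,     # sampling temperature
            "top_p": 0.95,          # nucleus sampling threshold
        },
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["generated_text"])
```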
Help out
Issues
- Documentation about default values of model parameters
- Add support for Mistral-Nemo
- max_batch_size limit doesn't work well at queue.next_batch()
- Gibberish generated with deepseek-ai/deepseek-coder-6.7b-base
- Phi-3 mini 128k produces gibberish if context >4k tokens
- Error "EOF while parsing an object..." with tool_calls
- Unable to load the local model file into LoRA adaptors
- `top_p` messes up `top_logprobs`
- RuntimeError: FlashAttention only supports Ampere GPUs or newer.
- multiple origins