transformers
https://github.com/huggingface/transformers
Python
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Help out
- Issues
- BUG : Modeling nemotron file does not cache key values even though
- possible llama rope implementation issue
- Fix: Enable prefill phase key value caching of nemotron/minitron models
- tokenizer.json modified after tokenizer.save_pretrained of OLMO models
- Fix low memory beam search
- [DO NOT MERGE] Testing the new ABI3 tokenizers version.
- High CPU memory usage as a bf16 model is auto-loaded as fp32 (see the first sketch below the list)
- ValueError: Invalid `cache_implementation` (offloaded) (see the second sketch below the list)
- draft, run model in compressed/uncompressed mode
- Add GOT-OCR 2.0 to Transformers
- Docs: Python not yet supported
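
The bf16/fp32 issue above stems from `from_pretrained` upcasting checkpoint weights to the framework default (fp32) unless a dtype is requested. A minimal sketch of the workaround, assuming an illustrative bf16 checkpoint ID that is not taken from the issue itself:

```python
import torch
from transformers import AutoModelForCausalLM

model_id = "meta-llama/Llama-3.1-8B"  # illustrative bf16 checkpoint, not from the issue

# Default behavior: weights are upcast to torch's default dtype (fp32),
# so a bf16 checkpoint occupies roughly twice the CPU RAM it needs.
model_fp32 = AutoModelForCausalLM.from_pretrained(model_id)

# Honor the dtype recorded in the checkpoint's config instead:
model_bf16 = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # or torch_dtype=torch.bfloat16 explicitly
)
print(next(model_bf16.parameters()).dtype)  # torch.bfloat16
```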
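The `cache_implementation` error above is raised when the installed transformers version does not recognize the requested cache type. A minimal sketch of offloaded KV caching, assuming a recent transformers release that accepts `cache_implementation="offloaded"` and, again, an illustrative model ID:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B"  # illustrative, not from the issue
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

inputs = tokenizer("The key/value cache can be", return_tensors="pt").to(model.device)

# Keep only the current layer's KV cache on the GPU and offload the rest
# to CPU, trading generation speed for device memory. On older releases
# this keyword value raises the ValueError reported above.
out = model.generate(**inputs, max_new_tokens=32, cache_implementation="offloaded")
print(tokenizer.decode(out[0], skip_special_tokens=True))
```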