transformers
https://github.com/huggingface/transformers
Python
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
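For context, the library's main entry point is the `pipeline` API; a minimal usage sketch (the default checkpoint it downloads is chosen by the library and may change between versions):

```python
from transformers import pipeline

# Build a sentiment-analysis pipeline; this downloads a default
# checkpoint on first use and runs on CPU unless a device is specified.
classifier = pipeline("sentiment-analysis")

print(classifier("Triaging issues is a great way to contribute."))
```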
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
35 Subscribers
View all Subscribers
Add a CodeTriage badge to transformers
Help out
- Issues
- Allow compressed-tensors quantized model to be trained
- num_quantizer in EncodecConfig should accept variable codebook size
- Add do_convert_rgb to vit
- Duplicate ZeRO-3 Global Step Checkpoint Saves
- Improve gguf tensor processing
- Add in free_memory passthrough
- Bugfix/inv freq
- [bugfix] incorrect usage of warning_once
- [Feature] Will there be any integration of Flex-attention (and Paged attention)?
- RoBERTa is not well implemented for tokenizers with pad_token_id != 1
- Docs
- Python not yet supported