transformers
https://github.com/huggingface/transformers
Python
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
39 Subscribers
Help out
- Issues
- Fix Import Error for Trainer in run_ner.py
- CPU processing is extremely slow for models loaded with `torch_dtype = torch.float16` (a repro sketch follows the list below)
- `DataCollatorForMultipleChoice` exists in the docs but not in the package
- add empty cache before load_best_model to prevent cuda OOM
- Bug when using StaticCache in Qwen2.5 Inference (a usage sketch follows the list below)
- Confusing error message
- Neftune computation is probably wrong with packed training
- Fix for TypeError in train_new_from_iterator() in tokenization_utils_fast.py
- Clear unused allocated GPU memory when available GPU memory is low.
- FlaxWhisperForConditionalGeneration Out Of Memory Error
- Docs (Python not yet supported)
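
For the float16-on-CPU issue above, here is a minimal repro sketch of the configuration being reported. The checkpoint name (`gpt2`) and the prompt are assumptions chosen only for illustration; the issue title does not name a specific model.

```python
# Minimal repro sketch (assumed checkpoint "gpt2"): load a causal LM in
# float16 and run it on CPU, the setup the issue reports as extremely slow.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.float16)  # stays on CPU

inputs = tokenizer("Hello, world", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

A common workaround is to keep CPU inference in float32 (for example, call `model.float()` after loading), since half-precision kernels on CPU are limited.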
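
For the StaticCache issue, here is a minimal sketch of generating with the static KV cache, which recent transformers versions expose through the `cache_implementation="static"` argument of `generate()`. The Qwen2.5 checkpoint id, dtype, and generation settings are assumptions for illustration, not details taken from the issue.

```python
# Minimal sketch (assumed checkpoint id): run generation with the static
# KV cache requested via cache_implementation="static".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

inputs = tokenizer("Hello", return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=32, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```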