transformers
https://github.com/huggingface/transformers
Python
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
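For context, a minimal sketch of what the library offers: the `pipeline` API loads a pretrained checkpoint and runs inference in a few lines. This is an illustrative usage example, not part of this repo's triage queue; it assumes `transformers` is installed along with a backend such as PyTorch.

```python
# Minimal sketch of the transformers pipeline API.
# "sentiment-analysis" is a built-in pipeline task; with no model argument,
# a default checkpoint is downloaded from the Hugging Face Hub on first use.
from transformers import pipeline

# Load a default pretrained model for the task.
classifier = pipeline("sentiment-analysis")

# Run inference; returns a list of dicts such as
# [{"label": "POSITIVE", "score": 0.99...}]
print(classifier("Triaging issues is a great way to contribute."))
```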
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported. 35 subscribers.
Help out
- Issues
  - feat: allow to use hf-models
  - RuntimeError in `_group_tensors_by_device_and_dtype` (torch/optim/optimizer.py) when training with FSDP on N>1 GPUs.
  - Translation model M2M100 uses 2 models in cache (from version 4.46.0)
  - Pass callbacks kwarg to study.optimize() in run_hp_search_optuna()
  - Better error message when loading adapter models with peft dependency missing
  - CI fails on few test_training_gradient_checkpointing tests for LLAMA
  - Fix skip of test_training_gradient_checkpointing
  - AttributeError when accessing .logits from BLIP-2 model output during conversion
  - fix: Handle BLIP-2 model output format
  - Gemma2: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
- Docs
  - Python not yet supported