transformers
https://github.com/huggingface/transformers
Python
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
- Issues
- Skipping cudagraphs for unknown reason
- Data Map Trainer Callback
- Added HHCache class implementing H2O Cache
- enable low-precision pipeline
- New `save_strategy` option called "best" to save when a new best performance is achieved.
- Tokenizer discard data that exceed max_length
- OOM when loading 300B models with `AutoModelForCausalLM.from_pretrained` and `BitsAndBytesConfig` quantization.
- Multi-GPU inference affects LLM's (Llama2-7b-chat-hf) generation.
- [WIP] Add implementation of `_extract_fbank_features_batch`
- Allow infer_framework_load_model to use the originally specified config.