transformers
https://github.com/huggingface/transformers
Python
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Help out
- Issues
- New `save_strategy` option called "best" to save a checkpoint whenever a new best performance is achieved (see the first sketch after this list).
- Tokenizer discards data that exceeds `max_length` (sketched below).
- OOM when loading 300B models with `AutoModelForCausalLM.from_pretrained` and `BitsAndBytesConfig` quantization (sketched below).
- Multi-GPU inference affects the LLM's (Llama2-7b-chat-hf) generation.
- [WIP] Add implementation of `_extract_fbank_features_batch`
- Allow `infer_framework_load_model` to use the originally specified config.
- Add argument to set number of eval steps in Trainer
- Handle the case where `from_pretrained_id` is a list.
- Support DeepSpeed sequence parallelism.
- Fix wav2vec2 with `torch.compile` (sketched below).
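The `save_strategy="best"` item above is a feature proposal; here is a minimal sketch of how the option would plug into `TrainingArguments`, assuming it reuses the existing `metric_for_best_model`/`greater_is_better` machinery (the output directory and metric are placeholders, and older releases name the evaluation argument `evaluation_strategy`):

```python
from transformers import TrainingArguments

# Sketch of the proposed save_strategy="best": write a checkpoint only
# when the tracked metric improves on its previous best value.
args = TrainingArguments(
    output_dir="out",                   # placeholder path
    eval_strategy="epoch",              # evaluate once per epoch
    save_strategy="best",               # the proposed option
    metric_for_best_model="eval_loss",  # metric that defines "best"
    greater_is_better=False,            # lower eval_loss is better
)
```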
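The tokenizer `max_length` item concerns tokens beyond the limit being dropped. A small sketch of the standard truncation behavior, with an arbitrarily chosen checkpoint for illustration:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")

text = "a fairly long sentence that will not fit into eight tokens"
enc = tok(text, truncation=True, max_length=8)
print(len(enc["input_ids"]))  # 8: everything past max_length is discarded

# Fast tokenizers can keep the overflow as extra sequences instead:
overflow = tok(text, truncation=True, max_length=8, return_overflowing_tokens=True)
print(len(overflow["input_ids"]))  # >1: the overflow chunks are preserved
```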
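For the 300B-parameter OOM report, the usual mitigation is 4-bit quantization plus `device_map="auto"`, which shards weights across the available devices as they load. A sketch assuming `bitsandbytes` and `accelerate` are installed; the checkpoint id is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 weights need roughly a quarter of the fp16 footprint;
# device_map="auto" places shards on GPUs (and CPU) as they stream in.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "org/some-300b-checkpoint",  # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)
```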
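The wav2vec2 `torch.compile` fix targets graph breaks when compiling the model; a minimal reproduction sketch (the checkpoint is chosen only for illustration, and the actual fix lives in the linked PR):

```python
import torch
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h").eval()
compiled = torch.compile(model)

# One second of fake 16 kHz audio; wav2vec2 consumes raw waveforms.
waveform = torch.randn(1, 16000)
with torch.no_grad():
    out = compiled(waveform)
print(out.last_hidden_state.shape)
```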
- Docs
- Python not yet supported