transformers
https://github.com/huggingface/transformers
Python
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Help out
- Issues
- Support dynamic batch size
- The maximum value of `input_ids` must be smaller than the embedding layer's input dimension (TFBartEncoder; see the embedding-range sketch after this list)
- Switch to the `sdpa_kernel` API with newer torch versions (see the `sdpa_kernel` sketch after this list)
- Fix `past_key_values` as input when using `Cache` (see the `DynamicCache` sketch after this list)
- Support BatchNorm in Hubert `pos_conv_emb` as in fairseq
- [mask2former] torch.export error for Mask2Former (see the `torch.export` sketch after this list)
- ValueError: The model did not return a loss from the inputs, only the following keys: logits. For reference, the inputs it received are: input_ids, attention_mask, pixel_values, aspect_ratio_ids, aspect_ratio_mask, cross_attention_mask (see the labels sketch after this list)
- fix(Mask2Former): torch export
- jinja2 is a necessary dependency, but it is not currently specified
- New dynamic cache for sliding window attention (see the sliding-window sketch after this list)
- Docs
- Python not yet supported
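Sketches for the issues referenced above follow. For the TFBartEncoder embedding-range issue, here is a minimal sketch of the invariant behind the error message: every token id must be strictly less than the embedding layer's vocabulary size. The sketch uses plain PyTorch and illustrative sizes; the actual issue concerns the TensorFlow BART encoder.

```python
import torch

vocab_size, hidden_size = 50265, 768          # illustrative BART-like sizes
embedding = torch.nn.Embedding(vocab_size, hidden_size)

ok_ids = torch.tensor([[0, 42, 50264]])       # max id == vocab_size - 1: valid
assert int(ok_ids.max()) < embedding.num_embeddings
print(embedding(ok_ids).shape)                # torch.Size([1, 3, 768])

bad_ids = torch.tensor([[0, 42, 50265]])      # id == vocab_size: out of range
# embedding(bad_ids) raises IndexError; TFBartEncoder surfaces the same
# violation as the error quoted in the issue title above.
```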
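For the `sdpa_kernel` migration, a minimal sketch of the old and new PyTorch context managers, assuming torch >= 2.3, where `torch.nn.attention.sdpa_kernel` replaces the deprecated `torch.backends.cuda.sdp_kernel`:

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

q = k = v = torch.randn(1, 8, 128, 64)

# Deprecated pre-2.3 style: per-backend boolean flags.
# with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False,
#                                     enable_mem_efficient=False):
#     out = F.scaled_dot_product_attention(q, k, v)

# Newer style: list the allowed backends explicitly. MATH is included so the
# snippet also runs on CPU-only machines.
with sdpa_kernel([SDPBackend.FLASH_ATTENTION, SDPBackend.MATH]):
    out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)
```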
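For the `past_key_values`/`Cache` fix, a minimal sketch of passing an explicit `Cache` object to a causal LM, assuming a recent transformers release where `DynamicCache` replaces the legacy tuple-of-tensors format (gpt2 is used purely for illustration):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello, my name is", return_tensors="pt")

# Pass a Cache object as past_key_values instead of the legacy tuples.
past_key_values = DynamicCache()
with torch.no_grad():
    outputs = model(**inputs, past_key_values=past_key_values, use_cache=True)

# The cache comes back populated with the key/value states.
print(type(outputs.past_key_values).__name__,
      outputs.past_key_values.get_seq_length())
```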
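For the Mask2Former `torch.export` issue and its fix PR, a minimal sketch of the export call being exercised, assuming torch >= 2.1 (where the `torch.export` API is available); whether it succeeds depends on the fix landing:

```python
import torch
from transformers import Mask2FormerForUniversalSegmentation

model = Mask2FormerForUniversalSegmentation.from_pretrained(
    "facebook/mask2former-swin-tiny-coco-instance"
).eval()

# Dummy (batch, channels, height, width) input for tracing.
pixel_values = torch.randn(1, 3, 384, 384)

# The issue reports this call erroring before the fix(Mask2Former) PR.
exported = torch.export.export(model, args=(pixel_values,))
print(exported)
```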
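The ValueError above is what transformers raises when no labels are supplied, so the forward pass returns only logits and there is nothing to compute a loss from. A minimal sketch of the usual fix, passing labels alongside the inputs (a small causal LM is used for brevity; the issue itself involves a multimodal checkpoint):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
batch = tokenizer("Triage one issue a day.", return_tensors="pt")

# Without labels, the output holds only logits and Trainer raises the
# "did not return a loss" ValueError; with labels, loss is populated.
outputs = model(**batch, labels=batch["input_ids"])
print(outputs.loss)
```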
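The sliding-window cache proposal amounts to evicting all but the most recent window of key/value positions so cache memory stays bounded. A toy sketch of that eviction rule on raw tensors, with a made-up window size; this is not the PR's actual implementation:

```python
import torch

window = 4                           # hypothetical sliding-window size
keys = torch.randn(1, 8, 10, 64)     # (batch, heads, seq_len, head_dim)

# Keep only the last `window` positions: memory stays O(window) rather than
# growing with the full generated sequence length.
trimmed_keys = keys[:, :, -window:, :]
print(trimmed_keys.shape)            # torch.Size([1, 8, 4, 64])
```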