transformers
https://github.com/huggingface/transformers
Python
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
- Issues
- A warning that `MultiScaleDeformableAttention.so` is not found in `/root/.cache/torch_extensions` when `ninja` is installed alongside `transformers`
- SinkCache (StreamingLLM) implemented over a post-RoPE key cache might result in confused positions at inference (see the RoPE sketch after this list)
- Any plans to add AIMv2 to the library?
- The way `SequenceClassification` models compute the last non-pad token may be unreasonable (see the pooling sketch after this list)
- Unclear what happens when combining torchrun, multiple GPUs, and `Trainer` arguments (see the note after this list)
- Add the RAdamScheduleFree optimizer
- Unable to convert Llama 3.3 weights to HF with `convert_llama_weights_to_hf.py`
- InternVL is ExecuTorch Compatible
- DeBERTa's `DisentangledSelfAttention` hardcodes `float` dtype, which causes `bfloat16` overflow error
- AllAboardBertweetModel
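
For the SinkCache/StreamingLLM item above: rotary embeddings bake the absolute position into each key, so an attention score depends only on the query-key position difference. Below is a minimal sketch, assuming GPT-NeoX-style RoPE (illustrative only, not the transformers implementation), of why a cache that stores post-RoPE keys and then re-indexes the surviving entries ends up with confused positions:

```python
import torch

def rope(x, pos, theta=10000.0):
    # Minimal GPT-NeoX-style rotary embedding: rotate each pair of dims
    # (x[i], x[i + d/2]) by an angle proportional to the position.
    half = x.shape[-1] // 2
    freqs = theta ** (-torch.arange(half) / half)
    angles = pos * freqs
    cos, sin = torch.cos(angles), torch.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

q, k = torch.randn(8), torch.randn(8)

# The attention score depends only on the relative distance m - n ...
s1 = rope(q, 7.0) @ rope(k, 4.0)
s2 = rope(q, 107.0) @ rope(k, 104.0)
print(torch.allclose(s1, s2, atol=1e-5))  # True: both distances are 3

# ... so if a sink cache stores keys already rotated at their original
# absolute positions but later treats them as sitting at new (shifted)
# cache positions, the relative distances the model sees no longer match.
s3 = rope(q, 7.0) @ rope(k, 1.0)  # same key, re-indexed to position 1
print(torch.allclose(s1, s3, atol=1e-5))  # False: position is "confused"
```

Keeping positions consistent after eviction therefore requires re-rotating the cached keys (or caching keys pre-RoPE), which is the subtlety the issue title points at.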
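For the `SequenceClassification` pooling item: decoder-style classification heads in transformers infer the position of the last non-pad token from the first occurrence of the pad id. The sketch below reconstructs that pattern (an assumption based on the issue title, not a verbatim copy of the library code) and shows a case where it picks the wrong token; the `flip`-based variant is a hypothetical alternative:

```python
import torch

pad_token_id = 0
input_ids = torch.tensor([
    [5, 6, 7, 0, 0],  # right-padded: last real token at index 2
    [5, 0, 6, 7, 0],  # pad id also appears mid-sequence (e.g. pad == eos)
])
seq_len = input_ids.shape[-1]

# Pattern under discussion: index of the FIRST pad token, minus one.
first_pad = torch.eq(input_ids, pad_token_id).int().argmax(-1)
sequence_lengths = (first_pad - 1) % seq_len
print(sequence_lengths)  # tensor([2, 0]) -- row 1 should be 3, not 0

# Hypothetical robust alternative: last position whose id is not pad.
not_pad = input_ids.ne(pad_token_id)
last_non_pad = seq_len - 1 - not_pad.flip(-1).int().argmax(-1)
print(last_non_pad)  # tensor([2, 3])
```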
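And a note on the torchrun question: each torchrun process drives one GPU, and `Trainer` interprets `per_device_train_batch_size` per process, so the effective global batch scales with the world size. A minimal sketch, assuming a hypothetical `train.py` launched with `torchrun --nproc_per_node=4 train.py`:

```python
# torchrun sets RANK / LOCAL_RANK / WORLD_SIZE; Trainer picks them up
# and wraps the model in DistributedDataParallel automatically.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,  # per PROCESS, i.e. per GPU
    gradient_accumulation_steps=2,
)

# Effective global batch = world_size * per_device * accumulation,
# e.g. 4 * 8 * 2 = 64 when launched with --nproc_per_node=4.
print(args.world_size * args.per_device_train_batch_size
      * args.gradient_accumulation_steps)
```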