transformers
https://github.com/huggingface/transformers
Python
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Help out
- Issues
- MultiModalityCausalLM does not support Flash Attention 2.0 yet
- `do_sample` model default cannot be overridden (sketched after this list)
- Add support for post-processing kwargs in image-text-to-text pipeline
- Fix: bamba error handling kwargs with forward pass
- Support modernBERT for encoder-decoder models
- Model loaded with `PretrainedModel.from_pretrained` and `with torch.device("cuda"):` context manager leads to unexpected errors compared to `.to("cuda")` (sketched after this list)
- `inv_freq_expanded` does not move to correct device in Qwen2
- Default value for `mean_resizing` in `resize_token_embeddings` should be False (sketched after this list)
- Fine-tuning `AutoModelForSequenceClassification.from_pretrained("meta-llama/Llama-3.2-1B")` bug: `RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument target in method wrapper_CUDA_nll_loss_forward)`, and AWQ importing
- [`Mamba2`] Varlen implementation
- Docs
- Python not yet supported
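
The `do_sample` issue above is about how a checkpoint's generation defaults interact with call-time arguments. A minimal sketch of the two places the flag can be set, assuming `gpt2` purely as an illustrative checkpoint (it is not the model named in the issue):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello", return_tensors="pt")

# Some checkpoints ship a generation_config with do_sample=True; the issue
# reports cases where passing do_sample=False at call time does not take effect.
out = model.generate(**inputs, do_sample=False, max_new_tokens=10)

# Alternative: change the default on the model's generation_config itself.
model.generation_config.do_sample = False
out = model.generate(**inputs, max_new_tokens=10)
```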
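The device-placement issue above contrasts two loading patterns. A minimal sketch of both, assuming a CUDA-capable machine, PyTorch >= 2.0 (where `torch.device(...)` works as a context manager), and `gpt2` as an illustrative checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM

# Pattern 1: load on CPU, then move the weights explicitly.
model_a = AutoModelForCausalLM.from_pretrained("gpt2").to("cuda")

# Pattern 2: make CUDA the default device while the weights are created.
# The issue reports that this path can behave differently from .to("cuda").
with torch.device("cuda"):
    model_b = AutoModelForCausalLM.from_pretrained("gpt2")

print(next(model_a.parameters()).device, next(model_b.parameters()).device)
```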
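The `mean_resizing` issue above concerns the default initialization used when the embedding matrix is grown. A minimal sketch, assuming a transformers release recent enough for `resize_token_embeddings` to accept the `mean_resizing` flag; `gpt2` and the `<my_token>` special token are illustrative, not from the issue:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Add a new special token, then grow the embedding matrix to match.
tokenizer.add_special_tokens({"additional_special_tokens": ["<my_token>"]})

# mean_resizing=True (the current default) initializes the new rows from the
# mean of the existing embeddings; the issue argues the default should be
# False, i.e. the model's normal random initialization.
model.resize_token_embeddings(len(tokenizer), mean_resizing=False)
```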