optimum
https://github.com/huggingface/optimum
Python
🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy-to-use hardware optimization tools
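As a quick illustration of what the library does, here is a minimal sketch (assuming the `optimum[onnxruntime]` extra is installed) that exports a Transformers checkpoint to ONNX Runtime and runs it through the standard `pipeline` API. The model id is only an example, not a requirement.

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint; any compatible Hub id works

# Export the checkpoint to ONNX on the fly and load it with ONNX Runtime
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The ORT model plugs into the usual transformers pipeline API
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Accelerated inference with ONNX Runtime"))
```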
Help out
- Issues
- --dtype fp16 does not decrease the model size
- Uncomment modernbert config
- Support for Exporting Specific Sub-Modules (e.g., Encoder, Decoder)
- Convert Stable Diffusion Inpainting model to FP16 with FP32 inputs
- cannot quantize bge onnx model (embedding model) without performance loss
- Qwen RuntimeError: The serialized model is larger than the 2GiB limit
- KeyError: 'swinv2 model type is not supported yet in NormalizedConfig.'
- Support for ONNX export of UMT5
- VisionEncoderDecoderModel ONNX Conversion - Swinv2-Xlm-roberta-base
- Add ONNX export support for TextNet
- Docs: doc triage not yet supported for Python
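Several of the issues above concern the `optimum-cli export onnx` path and its `--dtype fp16` option (the flag named in the first issue). The sketch below shows what such an export invocation can look like; the model id and output directory are placeholders, and flag availability may vary across optimum versions.

```python
# Hypothetical invocation of the ONNX export referenced by the fp16-related issues above.
import subprocess

subprocess.run(
    [
        "optimum-cli", "export", "onnx",
        "--model", "distilbert-base-uncased",  # placeholder Hub model id
        "--dtype", "fp16",                     # half-precision export flag cited in the issues
        "onnx_fp16/",                          # output directory for the exported model
    ],
    check=True,  # raise if the export fails
)
```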