accelerate
https://github.com/huggingface/accelerate
Python
🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision
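For readers new to the library, here is a minimal sketch of the training-loop pattern accelerate is built around: wrap the objects with `Accelerator.prepare()` and call `accelerator.backward()`, and the same loop runs on one GPU, several GPUs, or a TPU. The toy model and data below are illustrative only, not taken from the repository.

```python
# Minimal, self-contained sketch of the Accelerator training-loop pattern.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # picks up device/mixed-precision settings from the launch config

# Toy data and model purely for illustration.
dataset = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
dataloader = DataLoader(dataset, batch_size=16)
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# prepare() wraps the objects so the loop works unchanged across hardware setups.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    accelerator.backward(loss)  # used in place of loss.backward()
    optimizer.step()
```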
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
1 Subscriber
Add a CodeTriage badge to accelerate
Help out
- Issues
  - [DO NOT MERGE] add all level buffer support when computing infer_auto_device_map
  - make big model inference compatible with torch.compile
  - `torch.compile` not working with `device_map` and multiple GPUs
  - [feature request] making the use of the `accelerate` launcher optional
  - SpanMarker support to the estimator tool
  - Why does accelerate's MegatronLMPlugin use its own megatron?
  - [feature-request] Add OpenVINO as an inference-only backend
  - allocate 80% for cpu is unset in `max_memory`
  - Enable cpu offload with weights inside the module
  - Accelerate integration with Transformer Engine crashes when using FlashAttention
- Docs
  - Python not yet supported
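Several of the issues above (`infer_auto_device_map`, `device_map` with multiple GPUs, `max_memory`, CPU offload) concern accelerate's big-model-inference utilities. As background, here is a hedged sketch of how a `device_map` is computed under explicit memory limits; the toy model and memory figures are illustrative assumptions, not drawn from any of the listed issues.

```python
# Sketch of computing a device_map with accelerate's big-model-inference helpers.
import torch
from accelerate import init_empty_weights, infer_auto_device_map

# Build the model structure on the "meta" device, without allocating real weight memory.
with init_empty_weights():
    model = torch.nn.Sequential(*[torch.nn.Linear(1024, 1024) for _ in range(8)])

# Compute a placement under memory limits; keys name an accelerator index (0) and the CPU.
# Modules that do not fit on the GPU budget spill over to CPU (and then disk, if configured).
device_map = infer_auto_device_map(
    model,
    max_memory={0: "2GiB", "cpu": "8GiB"},  # illustrative limits
)
print(device_map)  # maps module names to devices
```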