deepspeed
https://github.com/microsoft/deepspeed
Python
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
Help out
- Issues
- Allow launcher to accept a bare `--include=node3` (whole node), not just `--include=node3:1,2,3,4,5,6,7,8` (see the launcher sketch after this list)
- [BUG] Universal Checkpoint Conversion: Resumed Training Behaves as If Model Initialized from Scratch
- Reduce the device bubble introduced by heavy loop synchronization in coalesced fetch/release (z3_leaf_module)
- Update MII tests to support the latest transformers release
- Discuss the compile config
- How to use the DeepSpeed framework and the Tutel framework together?
- DeepSpeed Windows install errors
- Error when parsing GPUs on a node when only the node name is given (`--include=node3` vs `--include=node3:1,2,4`)
- Support parallel conversion of ZeRO checkpoints to FP32/FP16/BF16 param weights (see the conversion sketch after this list)
- [BUG] Training batch size is not consistent with train_batch_size (see the config sketch after this list)
- Docs
- Python not yet supported
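
For context on the two launcher `--include` issues above: the DeepSpeed launcher selects workers with `--include=<host>:<gpu_ids>`, and the reports ask that the bare `--include=node3` (all GPUs on that node) form also parse correctly. A minimal sketch of driving the launcher from Python, assuming a hypothetical `train.py`, a `hostfile` that lists `node3`, and a `ds_config.json`:

```python
import subprocess

# Fine-grained form that works today: run train.py on GPUs 0-3 of host
# "node3" only. Host name, script, hostfile, and config are placeholders.
subprocess.run(
    [
        "deepspeed",
        "--hostfile", "hostfile",
        "--include", "node3:0,1,2,3",  # <host>:<comma-separated GPU indices>
        "train.py",
        "--deepspeed_config", "ds_config.json",
    ],
    check=True,
)

# The issues above ask that the bare host form, --include=node3
# (meaning "all GPUs on node3"), be accepted as well.
```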
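
The ZeRO-to-FP32 conversion item refers to reconstructing full, unsharded parameters from the partitioned files a ZeRO checkpoint writes. A minimal sketch using DeepSpeed's bundled `zero_to_fp32` helper (the checkpoint path and output filename are placeholders; the stock helper runs in a single process, which is what the issue proposes to parallelize):

```python
import torch
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint

# Rebuild a consolidated FP32 state dict from the sharded ZeRO checkpoint
# files under ./checkpoints (placeholder path, latest tag by default).
state_dict = get_fp32_state_dict_from_zero_checkpoint("./checkpoints")

# Save it for use outside DeepSpeed, e.g. plain PyTorch inference.
torch.save(state_dict, "pytorch_model_fp32.bin")
```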
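
The batch-size bug report touches the invariant DeepSpeed enforces between its config fields: `train_batch_size` must equal `train_micro_batch_size_per_gpu` × `gradient_accumulation_steps` × the number of GPUs. A minimal sketch of a consistent config fragment, assuming a hypothetical 8-GPU run:

```python
# ds_config fragment for 8 GPUs: 4 (micro batch) x 8 (grad accumulation)
# x 8 (GPUs) = 256, so the three fields agree.
ds_config = {
    "train_batch_size": 256,
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 8,
}
```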