deepspeed
https://github.com/microsoft/deepspeed
Python
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
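A minimal training-step sketch of what this looks like in practice, assuming a toy PyTorch model and illustrative ZeRO Stage-2 config values; scripts like this are normally launched with the DeepSpeed CLI (e.g. `deepspeed train.py`):

```python
# Minimal DeepSpeed training-step sketch; the model and config values are
# illustrative only, not recommendations.
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)

ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # ZeRO Stage-2 partitions optimizer state and gradients
}

# deepspeed.initialize wraps the model in an engine that handles data
# parallelism, ZeRO partitioning, and mixed precision.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

x = torch.randn(8, 1024, device=engine.device, dtype=torch.half)
loss = engine(x).float().pow(2).mean()
engine.backward(loss)   # the engine scales the loss and reduces gradients
engine.step()           # optimizer step and gradient zeroing handled by the engine
```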
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
9 Subscribers
Add a CodeTriage badge to deepspeed
Help out
- Issues
- [BUG] Zero Stage-2 Frozen Layers
- [BUG] Unexpected calculations in the backward pass with ZeRO-Infinity SSD offloading
- Issue with DeepSpeed Inference - Multiple Processes for Model Loading and Memory Allocation
- [PROBLEM] P2P recv blocking on data prevents other threads in the same process from performing any operations
- [BUG] DeepSpeed distributed training crashes with "TypeError: can't convert complex to float"
- [REQUEST] split zero3 checkpoint files into optim states and master weights
- [REQUEST] [zero_to_fp32.py] Allow user to specify the dtype (e.g. `torch.bfloat16`) in the output file (a workaround sketch follows this list)
- [REQUEST] implementing interleaved 1F1B in pipeline parallelism
- [REQUEST] How to use int8 quantization for inference without training?
- Exception: Current loss scale already at minimum - cannot decrease scale anymore. Exiting run.
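For the `zero_to_fp32.py` dtype request above, a hedged workaround sketch while no such flag exists: consolidate the ZeRO shards to fp32 with DeepSpeed's helper and cast the result manually (the checkpoint path is hypothetical, and the helper's module path may vary between DeepSpeed releases):

```python
# Workaround sketch: convert a ZeRO checkpoint to a bf16 state dict.
# The checkpoint directory below is hypothetical.
import torch
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint

checkpoint_dir = "checkpoints/my_run"  # hypothetical ZeRO checkpoint directory

# Merge the partitioned parameter/optimizer shards into a single fp32 state dict.
fp32_state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir)

# Cast every tensor before saving, since zero_to_fp32.py currently writes fp32 only.
bf16_state_dict = {k: v.to(torch.bfloat16) for k, v in fp32_state_dict.items()}
torch.save(bf16_state_dict, "pytorch_model_bf16.bin")
```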
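The loss-scale exception in the last issue comes from the dynamic fp16 loss scaler hitting its floor. A sketch of the relevant config knobs, with illustrative values only, plus the bf16 alternative that needs no loss scaling:

```python
# Illustrative fp16/bf16 settings for a DeepSpeed config dict; values are
# examples and should be tuned to the model and hardware.
ds_config_fp16 = {
    "fp16": {
        "enabled": True,
        "initial_scale_power": 16,   # start the dynamic loss scale at 2**16
        "loss_scale_window": 1000,   # stable steps before the scale is raised
        "min_loss_scale": 1,         # floor that triggers the "cannot decrease" error
    }
}

# On hardware that supports it, bf16 avoids loss scaling entirely.
ds_config_bf16 = {"bf16": {"enabled": True}}
```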
- Docs
- Python not yet supported