deepspeed
https://github.com/microsoft/deepspeed
Python
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
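For readers new to the project, here is a minimal sketch of how DeepSpeed typically wraps a PyTorch model for distributed training. The model, config values, and `data_loader` below are illustrative assumptions, not taken from this page.

```python
import torch
import deepspeed

# Hypothetical model: a small feed-forward network, used only for illustration.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

# Example DeepSpeed config enabling ZeRO stage 2; the values are placeholders.
ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
    "zero_optimization": {"stage": 2},
}

# deepspeed.initialize wraps the model and optimizer for distributed training.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

for inputs, labels in data_loader:  # data_loader is assumed to exist
    inputs = inputs.to(model_engine.device)
    labels = labels.to(model_engine.device)
    loss = torch.nn.functional.cross_entropy(model_engine(inputs), labels)
    model_engine.backward(loss)  # DeepSpeed handles loss scaling and gradient reduction
    model_engine.step()          # optimizer step plus gradient zeroing
```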
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes instead and supercharge your commit history.
Python not yet supported · 11 Subscribers
Add a CodeTriage badge to deepspeed
Help out
- Issues
- [BUG] grad_norm and loss are NaN with deepspeed==0.13.5 but fine with deepspeed==0.10.2
- OOM during llama2-70B SFT
- Fine-tune only part of the embedding parameters
- [BUG] DeepSpeed + LLaMA Factory: connection interruption during single-node multi-GPU fine-tuning and surprisingly high GPU memory usage
- apply reduce_scatter_coalesced op
- [BUG] unable to use a hostfile with a name that is not "hostfile"
- [BUG] inconsistent optimizer naming and defaults
- [BUG] Deepspeed Crashes when using MoE, Stage 2 Offload with DeepSpeedCPUAdam
- [BUG] ValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
- [BUG] Memory leak during autograd backwards
- Docs
- Python not yet supported