deepspeed
https://github.com/microsoft/deepspeed
Python
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
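For orientation, here is a minimal sketch of typical DeepSpeed usage. It assumes a CUDA machine and the `deepspeed` launcher (e.g. `deepspeed train.py`); the tiny model and the concrete config values are illustrative stand-ins, while `deepspeed.initialize`, `backward`, and `step` are the library's actual entry points.

```python
# Minimal DeepSpeed training sketch (model and config values are illustrative).
import torch
import deepspeed

model = torch.nn.Linear(512, 512)  # stand-in for a real network

ds_config = {
    "train_batch_size": 16,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # ZeRO-2: partition optimizer states + gradients
}

# deepspeed.initialize wraps the model in an engine that manages data
# parallelism, ZeRO partitioning, mixed precision, and the optimizer step.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

for _ in range(10):
    # fp16 is enabled above, so inputs are created in half precision.
    x = torch.randn(16, 512, device=model_engine.device, dtype=torch.half)
    loss = model_engine(x).pow(2).mean()
    model_engine.backward(loss)  # engine-managed backward (handles loss scaling)
    model_engine.step()          # optimizer step + gradient zeroing
```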
Issues
- [BUG] Multi-GPU training is much slower than single-GPU (due to additional processes?)
- Why aren't frozen params saved unless `self.zero_optimization_stage() >= ZeroStageEnum.gradients`? (see the first sketch after this list)
- [BUG] Whisper model pipeline-parallel training: logits and ground-truth size mismatch during loss calculation
- Adding DS Feature API in accelerator
- Uniform DeepSpeed overflow check
- [BUG] No `universal_checkpoint_info` in the Accelerate+DeepSpeed checkpoint
- [BUG] When using pure DeepSpeed Ulysses and ZeRO stage 3 to continue pre-training, the loss gap between GPUs is too large
- [BUG] Gradient accumulation steps initialization bug in pipeline-parallel mode (see the config sketch after this list)
- [BUG] Mis-typed free_blocks
- [BUG] Failure to resume from checkpoint with a different GPU count (Hugging Face Trainer + DeepSpeed)
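The frozen-params question above turns on DeepSpeed's ZeRO stage enum. The sketch below only illustrates what that comparison selects; `ZeroStageEnum` is the library's real enum (in `deepspeed.runtime.zero.config`), while the loop and the interpretation of the checkpoint behavior are assumptions drawn from the issue title.

```python
from deepspeed.runtime.zero.config import ZeroStageEnum

# ZeroStageEnum values: disabled=0, optimizer_states=1, gradients=2, weights=3.
# `stage >= ZeroStageEnum.gradients` is therefore true only for ZeRO-2 and
# ZeRO-3, the stages that partition gradients (and, at stage 3, parameters).
# Per the issue title, that is the condition under which frozen params are saved.
for stage in range(4):
    saved = stage >= ZeroStageEnum.gradients  # int vs. int-Enum comparison works
    print(f"ZeRO stage {stage}: frozen params saved -> {saved}")
```

The gradient-accumulation issue above involves the invariant DeepSpeed enforces among its three batch-size settings. The keys in this config sketch are real DeepSpeed config fields; the concrete numbers and the world size are assumptions.

```python
# Hedged sketch of DeepSpeed's batch-size invariant:
#   train_batch_size = micro_batch_per_gpu * gradient_accumulation_steps * ranks
# In pipeline-parallel mode, `ranks` is the data-parallel degree
# (total GPUs / pipeline stages / model-parallel size), not the total GPU count.
world_size = 8  # assumed number of data-parallel ranks

ds_config = {
    "train_micro_batch_size_per_gpu": 2,    # batch per GPU per forward pass
    "gradient_accumulation_steps": 4,       # micro-batches accumulated per step
    "train_batch_size": 2 * 4 * world_size, # must equal micro * accum * ranks
}

assert (
    ds_config["train_batch_size"]
    == ds_config["train_micro_batch_size_per_gpu"]
    * ds_config["gradient_accumulation_steps"]
    * world_size
)
```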