DeepSpeed
https://github.com/microsoft/deepspeed
Python
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
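For orientation, a minimal sketch of the usual DeepSpeed training setup, assuming a toy model and an illustrative inline config (real projects typically pass a JSON config file and launch with the `deepspeed` CLI):

```python
# Minimal DeepSpeed setup sketch; the model and ds_config values are
# illustrative stand-ins, not a prescription for real training runs.
import torch
import deepspeed

model = torch.nn.Linear(784, 10)  # stand-in for a real network

ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    # ZeRO stage 2 partitions optimizer state and gradients across ranks.
    "zero_optimization": {"stage": 2},
}

# deepspeed.initialize wraps the model in an engine that handles data
# parallelism, ZeRO partitioning, gradient accumulation, and mixed precision.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

# One training step: the engine replaces loss.backward() and optimizer.step().
inputs = torch.randn(8, 784).to(model_engine.device)
targets = torch.randint(0, 10, (8,)).to(model_engine.device)
loss = torch.nn.functional.cross_entropy(model_engine(inputs), targets)
model_engine.backward(loss)
model_engine.step()
```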
- Issues
- [BUG] Universal checkpoint conversion - "Cannot find layer_01* files in there"
- [BUG] Multi-node fine-tuning with Thunderbolt
- Add DataStates-LLM: Asynchronous Checkpointing Engine Support
- [BUG] Can't run FP8 with pipeline parallelism
- [BUG] Multi-GPU training gets stuck when the computation graph is not complete for each process
- In distributed training, an error occurs when loading saved model checkpoints to resume training
- [REQUEST] Asynchronous Checkpointing
- [REQUEST] Does Universal Checkpoint support MoE checkpoints?
- [BUG] Different seeds give exactly the same loss with ZeRO stages 1, 2, and 3 during multi-GPU training
- Issue with LoRA Tuning on llama3-70b using PEFT and TRL's SFTTrainer
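Several of the issues above concern saving and restoring training state. For reference, a sketch of the checkpoint round-trip in DeepSpeed's public API, assuming `model_engine` came from `deepspeed.initialize` as above; the directory and tag names are illustrative:

```python
# Checkpoint round-trip sketch; every rank must make these calls
# collectively, since each rank owns a shard of the partitioned state.
save_dir = "./checkpoints"  # illustrative path

# save_checkpoint writes each rank's shard of the (ZeRO-partitioned)
# model and optimizer state under the given tag.
model_engine.save_checkpoint(save_dir, tag="step_1000")

# load_checkpoint restores engine state so training can resume; it returns
# the path it loaded from (None on failure) plus any saved client state.
load_path, client_state = model_engine.load_checkpoint(save_dir, tag="step_1000")
assert load_path is not None, "checkpoint not found or could not be loaded"
```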