deepspeed
https://github.com/microsoft/deepspeed
Python
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
9 Subscribers
Add a CodeTriage badge to deepspeed
Help out
- Issues
- Multiple runs on the same machine: pressing Ctrl+C kills all runs
- Question about using Autotuner with ZeRO and tensor parallelism
- DeepSpeed setup for requiring grads on the input (explainability) without a huge memory increase across all GPUs
- Use DS4Sci_EvoformerAttention and torch.utils.checkpoint.checkpoint at the same time during training
- [BUG] DeepSpeed inference for Llama 3.1 70B on 2 nodes, each node with 2 GPUs
- AssertionError: `no_sync` context manager is incompatible with gradient partitioning logic of ZeRO stage 3 (see the sketch after this list)
- [REQUEST] Let ZeRO-Offload use CPU and GPU in parallel
- Stage 3: Use the new torch grad accumulation hooks API
- [BUG] [Fix-Suggested] KeyError in stage_1_and_2.py Due to Optimizer-Model Parameter Mismatch
- [BUG] [Fix-Suggested] Checkpoint Inconsistency When Freezing Model Parameters Before `deepspeed.initialize`
- Docs: Python not yet supported
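
Several of the issues above (the `no_sync`/ZeRO stage 3 assertion, the frozen-parameter checkpoint inconsistency) revolve around how a model is handed to `deepspeed.initialize`. Below is a minimal sketch of that pattern, assuming a hypothetical toy model and hyperparameters; it is illustrative only, not this project's code. Note that DeepSpeed manages gradient accumulation itself via `gradient_accumulation_steps`, so there is no DDP-style `no_sync()` under ZeRO stage 3, and parameters are frozen on the torch module before initialization.

```python
# Minimal sketch (not this project's code): wiring up deepspeed.initialize
# for ZeRO stage 3. Model, sizes, and hyperparameters are hypothetical
# placeholders.
import torch
import deepspeed

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 1),
)

# Freeze parameters on the torch module *before* deepspeed.initialize;
# the checkpoint-inconsistency issue above concerns exactly this ordering.
for p in model[0].parameters():
    p.requires_grad = False

ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    # DeepSpeed manages gradient accumulation itself; there is no
    # DDP-style no_sync() under ZeRO stage 3 (hence the AssertionError
    # reported above when one is used).
    "gradient_accumulation_steps": 4,
    "zero_optimization": {"stage": 3},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# Returns (engine, optimizer, dataloader, lr_scheduler); only trainable
# parameters are handed to the optimizer.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=[p for p in model.parameters() if p.requires_grad],
    config=ds_config,
)

for step in range(16):  # stand-in for a real data loader
    x = torch.randn(8, 1024, device=engine.device)
    loss = engine(x).pow(2).mean()
    engine.backward(loss)  # engine decides when to reduce/partition grads
    engine.step()          # optimizer step only on accumulation boundaries
```

Such a script is normally launched with the `deepspeed` CLI (e.g. `deepspeed sketch.py`), which sets up the distributed environment the engine expects.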