lit-llama
https://github.com/lightning-ai/lit-llama
0 Subscribers
Help out
- Issues
- Getting really slow training time
- Missing eos_id=tokenizer.eos_id in the generate function call in generate/full.py
- Floating point exception (core dumped)
- Fix adapter v2 llm.int8 inference
- How to use deepspeed zero-3-offload strategy correctly? (Parameters Duplication Issue)
- No module named 'torch.utils._device'
- Resume fine-tuning from an intermediate checkpoint
- Generate with batched inputs
- Is it possible to finetune adapter with 2 x RTX3060 12GB?
- Export finetuned lora weights to base model
- Docs
- not yet supported
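One issue above reports a missing `eos_id=tokenizer.eos_id` argument in the `generate` call in `generate/full.py`. A minimal sketch of why that argument matters, using a hypothetical `generate` function (not the actual lit-llama API): without an `eos_id`, the loop runs for the full `max_new_tokens` budget instead of stopping at the end-of-sequence token.

```python
def generate(next_token_fn, prompt, max_new_tokens, eos_id=None):
    """Append tokens produced by next_token_fn to the prompt.

    Hypothetical stand-in for a model's generate function: stops early
    when eos_id is produced, otherwise runs for max_new_tokens steps.
    """
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        tok = next_token_fn(tokens)
        tokens.append(tok)
        if eos_id is not None and tok == eos_id:
            break  # end-of-sequence reached; stop generating
    return tokens


# Toy "model" that emits 7, 8, then the EOS token 2 repeatedly.
stream = iter([7, 8, 2, 2, 2])
out = generate(lambda t: next(stream), [1], max_new_tokens=5, eos_id=2)
# Stops right after the EOS token: [1, 7, 8, 2]
```

Forgetting `eos_id` (leaving it `None`) would instead consume all five tokens and pad the output with trailing EOS tokens, which is the behavior the issue describes fixing.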