lit-llama
https://github.com/lightning-ai/lit-llama
- Issues
- [question] assert lora_path.is_file() error
- Question about 'validating...' from lora.py
- [question] error message while finetuning
- Mistral Model
- Adapter finetuning does not run on two cards (A100 40G)
- When finetuning the model, an error occurred during decoding: IndexError: Out of range: piece id is out of range.
- Looking for LLaMA 2?
- Only adding a linear layer to LLaMA, without any computation, degrades the performance
- (documentation) How do I know if generate.py is running on GPU / GPU configuration
- multi gpus for full finetune
- Docs: not yet supported