lit-llama
https://github.com/lightning-ai/lit-llama
- Issues
- Error with verify option when using convert_hf_checkpoint.py
- Adapter small fix
- Use of left padding
- Apply LoRA to more Linear layers
- Unable to run generate or convert the Llama 7B model on M1 macbook
- Unable to run inference on multiple GPUs
- Be able to set custom precision for MPS accelerators
- Sequence classification with Lit-LLaMA
- Try model partitioning for quantized inference
- Further clarify licensing in README