lit-llama
https://github.com/lightning-ai/lit-llama
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Not yet supported
0 Subscribers
Add a CodeTriage badge to lit-llama
Help out
- Issues
- Fix: Variable name
- Error: `git submodule update --init --recursive -q` did not run successfully
- Ban some tokens
- RuntimeError: Expected x1.dtype() == cos.dtype() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
- Can I use Lightning Fabric to pre-train Llama 2 on a V100?
- How to quantize LLaMA during fine-tuning?
- How to convert HF 70B weights to lit-llama weights?
- RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
- Why are LLaMA's responses to queries in the conversation so wrong?
- [question] NaN loss value and runtime error
- Docs
- not yet supported