peft
https://github.com/huggingface/peft
- Issues
- LoraConfig conflict when using `layers_to_transform` in `LlamaModel`
- FIX: Removed duplicate convolution for DoRA
- About run_unsloth_peft.sh
- Key mismatch when trying to load a LoRA adapter into an X-LoRA model
- merge_and_unload docs do not clarify behaviour for quantized base models
- Adaptation for MoE models
- [FEAT] Add support for optimum-quanto
- Support optimum-quanto
- Deprecation: Transformers will no longer support `past_key_values` as tuples
- Inference with different LoRA adapters in the same batch does not use the correct `modules_to_save` classifier