Lottery Rank-Pruning Adaptation for Parameter-Efficient Fine-Tuning
Recent studies on parameter-efficient fine-tuning (PEFT) have introduced effective and efficient methods for fine-tuning large language models (LLMs) on downstream tasks using far fewer parameters than full fine-tuning requires. Low-rank adaptation (LoRA) reduces the number of trainable parameters to as little as 0.03% of that used in full fine-tuning while maintaining comparable performance.
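To make the LoRA idea concrete, the following is a minimal sketch (not the method proposed here) of how a frozen pretrained linear layer can be augmented with a trainable low-rank update W + (alpha/r) * B A, in the spirit of Hu et al.'s LoRA; the class name LoRALinear and the hyperparameters r and alpha are illustrative assumptions, not names from this paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update.

    Effective weight: W + (alpha / r) * B @ A, where only A and B are trained.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pretrained weights; only the low-rank factors are updated.
        self.base.weight.requires_grad_(False)
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # A: (r, in_features), B: (out_features, r). B starts at zero so the
        # wrapped layer initially behaves exactly like the pretrained one.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```

With r much smaller than the layer's width, the trainable parameter count is r * (in_features + out_features) per layer, rather than in_features * out_features, which is the source of the large parameter savings cited above.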