Understanding the Principles of LoRA
Introduction

As model scale continues to grow, fine-tuning all of a model's parameters (so-called full fine-tuning) becomes increasingly impractical. Taking GPT-3 with its 175 billion parameters as an example, every new domain would require fine-tuning and storing a complete new copy of the model, which is prohibitively expensive.

Paper: LORA: LOW-RANK ADAPTATION OF LARGE LANGUAGE …
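The idea behind the paper's title can be sketched in a few lines. Instead of updating the full weight matrix W, LoRA freezes W and learns a low-rank update BA (with rank r much smaller than the weight dimensions), scaled by a factor alpha / r. The shapes and hyperparameter values below (d, r, alpha) are illustrative choices, not the paper's settings; this is a minimal NumPy sketch, not a production implementation.

```python
import numpy as np

d, r = 1024, 8     # d: hypothetical weight dimension; r << d is the LoRA rank
alpha = 16         # scaling hyperparameter

W = np.random.randn(d, d) * 0.02   # frozen pretrained weight (not trained)
A = np.random.randn(r, d) * 0.01   # trainable, small random init
B = np.zeros((d, r))               # trainable, zero init, so BA = 0 at start

x = np.random.randn(d)

# Forward pass: frozen path plus the scaled low-rank update.
y = W @ x + (alpha / r) * (B @ (A @ x))

# At initialization B is zero, so the adapted model matches the pretrained one.
assert np.allclose(y, W @ x)

# Trainable parameters drop from d*d to 2*d*r.
print(d * d, "->", 2 * d * r)
```

Because only A and B are trained, each new domain stores just 2·d·r extra values per adapted matrix instead of a full d×d copy, which is the cost argument made above.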