Guoan Wan, Tianyu Chen, Fangzheng Feng, Haoyi Zhou, Runhua Xu
FRoD is a new fine-tuning method that uses rotational degrees of freedom to improve convergence speed and expressiveness, matching the accuracy of full fine-tuning while training only a minimal number of parameters.
Fine-tuning large models for specific tasks is resource-intensive, so parameter-efficient methods aim to reduce the number of parameters that must be updated. However, many of these methods, such as LoRA, suffer from slow convergence and limited expressiveness. FRoD takes a different approach: it combines bases shared across layers with small, learnable updates, exploiting rotational degrees of freedom to improve both convergence speed and accuracy. As a result, it matches the performance of fully fine-tuned models while training only a small fraction of their parameters.
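To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of such a layer, not the authors' implementation. It assumes one plausible reading of the abstract: a frozen pretrained weight, a fixed basis shared across layers, a small skew-symmetric generator whose matrix exponential yields a per-layer rotation of that basis (the "rotational degrees of freedom"), and a small learnable coefficient matrix. The class name `FRoDLinear` and all parameter shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn


class FRoDLinear(nn.Module):
    """Hypothetical sketch of a FRoD-style adapter (not the authors' code)."""

    def __init__(self, weight: torch.Tensor, shared_basis: torch.Tensor):
        super().__init__()
        out_dim, in_dim = weight.shape
        rank = shared_basis.shape[0]
        self.register_buffer("weight", weight)        # frozen pretrained weight
        self.register_buffer("basis", shared_basis)   # (rank, in_dim), shared across layers
        # Skew-symmetric generator: matrix_exp(S - S^T) is always a rotation,
        # so each layer can rotate the shared basis with only rank^2 parameters.
        self.skew = nn.Parameter(torch.zeros(rank, rank))
        # Small per-layer learnable coefficients mapping the rotated basis to outputs.
        self.coeffs = nn.Parameter(torch.zeros(out_dim, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rotation = torch.matrix_exp(self.skew - self.skew.T)  # orthogonal, det = +1
        delta = self.coeffs @ rotation @ self.basis           # low-rank weight update
        return x @ (self.weight + delta).T


# Usage: two layers share one fixed basis; only the small rotation
# generators and coefficient matrices are trained.
d_in, d_out, rank = 512, 512, 8
basis = torch.linalg.qr(torch.randn(d_in, rank)).Q.T  # orthonormal shared rows
layers = [FRoDLinear(torch.randn(d_out, d_in), basis) for _ in range(2)]
trainable = sum(p.numel() for l in layers for p in l.parameters() if p.requires_grad)
print(trainable)  # 2 * (rank**2 + d_out * rank), far fewer than 2 * d_out * d_in
```

Under these assumptions the trainable parameter count per layer is rank² + out_dim · rank, which illustrates how sharing the basis and learning only rotations plus small coefficients can approach the expressiveness of a dense update at a fraction of the cost.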