QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models

By Yuhui Xu et al.

Table of Contents

Abstract
1 Introduction
2 Related Work
3 The Proposed Approach
3.1 Baseline: Low-Rank Adaptation and Low-Bit Quantization
3.2 Objective: Efficient Adaptation and Deployment
3.3 Solution: Group-Wise Quantization with Low-Rank Adaptation
Algorithm 1 QA-LoRA Pseudocode in the PyTorch-like style
The Insight of QA-LoRA: Balance

Summary

Recent years have witnessed a rapid development of large language models (LLMs). Despite their strong ability in many language-understanding tasks, the heavy computational burden largely restricts the application of LLMs, especially when they must be deployed on edge devices. In this paper, we propose a quantization-aware low-rank adaptation (QA-LoRA) algorithm. The motivation lies in the imbalanced degrees of freedom of quantization and adaptation, and the solution is to use group-wise operators, which increase the degrees of freedom of quantization while decreasing those of adaptation. QA-LoRA is easily implemented with a few lines of code, and it equips the original LoRA with two-fold abilities: (i) during fine-tuning, the LLM's weights are quantized (e.g., into INT4) to reduce time and memory usage; (ii) after fine-tuning, the LLM and auxiliary weights are naturally integrated into a quantized model without loss of accuracy. We apply QA-LoRA to the LLaMA and LLaMA2 model families and validate its effectiveness across different fine-tuning datasets and downstream scenarios.
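To make the "balance" concrete, below is a minimal, PyTorch-style sketch of what a QA-LoRA linear layer could look like, assuming group-wise INT4 quantization along the input dimension and a LoRA branch that only sees group-averaged inputs. The class and names (`QALoRALinear`, `group_size`, `lora_A`, `lora_B`, `scale`, `zero`) are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of a QA-LoRA layer: group-wise quantized base weights plus a
# LoRA branch restricted to per-group averages of the input. Shapes and names are
# illustrative only.
import torch
import torch.nn.functional as F


class QALoRALinear(torch.nn.Module):
    def __init__(self, d_in, d_out, rank=16, group_size=32, alpha=16.0):
        super().__init__()
        assert d_in % group_size == 0
        self.group_size = group_size
        n_groups = d_in // group_size
        # Group-wise quantized weight: INT4 codes (stored in uint8 for simplicity)
        # with one scale and one zero point per (output row, input group).
        self.register_buffer("w_int", torch.zeros(d_out, d_in, dtype=torch.uint8))
        self.register_buffer("scale", torch.ones(d_out, n_groups))
        self.register_buffer("zero", torch.zeros(d_out, n_groups))
        # LoRA branch: A acts on the pooled (per-group averaged) input, i.e. the
        # adapter's degrees of freedom are reduced from d_in to n_groups.
        self.lora_A = torch.nn.Parameter(torch.randn(rank, n_groups) * 0.01)
        self.lora_B = torch.nn.Parameter(torch.zeros(d_out, rank))
        self.scaling = alpha / rank

    def dequantize(self):
        # W = scale * (w_int - zero), applied group-wise along the input dimension.
        d_out, d_in = self.w_int.shape
        w = self.w_int.float().reshape(d_out, -1, self.group_size)
        w = (w - self.zero.unsqueeze(-1)) * self.scale.unsqueeze(-1)
        return w.reshape(d_out, d_in)

    def forward(self, x):
        # Base path: (de)quantized weights; at deployment this stays low-bit.
        y = F.linear(x, self.dequantize())
        # Adapter path: average-pool x within each quantization group, then LoRA.
        # (Mean vs. sum pooling only differs by a constant absorbable into A.)
        x_pool = x.reshape(*x.shape[:-1], -1, self.group_size).mean(dim=-1)
        y = y + self.scaling * F.linear(F.linear(x_pool, self.lora_A), self.lora_B)
        return y
```

Because the adapter output depends only on per-group summaries of the input, the learned correction can, after fine-tuning, be folded into the per-group zero points, which is, roughly, how the adapted model remains a fully quantized model instead of needing a separate full-precision adapter at inference time.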