QLoRA: Efficient Finetuning of Quantized LLMs (arXiv:2305.14314)
The focal property of interest is numerical reasoning over financial documents, specifically quarterly filings made with the SEC. The Llama-3-8B model was fine-tuned using the QLoRA approach, chosen because the paper reports that 4-bit quantized fine-tuning can match full-precision fine-tuning performance while using far less memory and hardware. In practice, the aggressive quantization substantially reduced training time and memory use while improving performance on the financial analysis task; a configuration sketch follows.
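Below is a minimal sketch of the QLoRA setup, assuming the transformers, peft, and bitsandbytes libraries; the LoRA rank, alpha, dropout, and target modules shown are illustrative choices rather than the exact values used for this run.

```python
# Sketch of a QLoRA setup: 4-bit NF4 quantization of the base model plus
# LoRA adapters trained on top. Hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "meta-llama/Meta-Llama-3-8B"

# 4-bit NF4 quantization with double quantization, as described in the QLoRA paper.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections; rank/alpha are illustrative.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

The table below compares ROUGE scores for the base model and the QLoRA fine-tuned model on the financial filings task.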
| ROUGE Metric | Base Model | QLoRA Fine-Tuned Model |
|---|---|---|
| ROUGE-1 | 0.05104785 | 0.25257307 |
| ROUGE-2 | 0.01158752 | 0.10479990 |
| ROUGE-L | 0.05104785 | 0.25175429 |
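The comparison above could be reproduced along the following lines, assuming the Hugging Face evaluate library (with the rouge_score package installed); the strings here are toy placeholders, not actual model outputs.

```python
# Sketch of the ROUGE comparison. In practice, the prediction lists come from
# generating answers on a held-out set of SEC-filing questions.
import evaluate

rouge = evaluate.load("rouge")

# Toy placeholder texts for illustration only.
references = ["Net revenue increased 12% year over year to $4.1 billion."]
base_outputs = ["The company had revenue."]
finetuned_outputs = ["Net revenue rose 12% year over year to $4.1 billion."]

base_scores = rouge.compute(predictions=base_outputs, references=references)
tuned_scores = rouge.compute(predictions=finetuned_outputs, references=references)

for key in ("rouge1", "rouge2", "rougeL"):
    print(f"{key}: base={base_scores[key]:.4f}  fine-tuned={tuned_scores[key]:.4f}")
```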
Base model: meta-llama/Meta-Llama-3-8B
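For inference, the trained LoRA adapter can be loaded on top of the quantized base model; a sketch is shown below, where "qlora-financial-adapter" is a placeholder path, not a published repository.

```python
# Sketch of loading the trained LoRA adapter onto the 4-bit base model for
# inference; the adapter path is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "qlora-financial-adapter")  # placeholder path

prompt = "Summarize the revenue trend reported in the latest 10-Q filing."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```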