arxiv:2405.00732

LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report

Published on Apr 29 · Featured in Daily Papers on May 3

Abstract

Low Rank Adaptation (LoRA) has emerged as one of the most widely adopted methods for Parameter Efficient Fine-Tuning (PEFT) of Large Language Models (LLMs). LoRA reduces the number of trainable parameters and memory usage while achieving comparable performance to full fine-tuning. We aim to assess the viability of training and serving LLMs fine-tuned with LoRA in real-world applications. First, we measure the quality of LLMs fine-tuned with quantized low rank adapters across 10 base models and 31 tasks for a total of 310 models. We find that 4-bit LoRA fine-tuned models outperform base models by 34 points and GPT-4 by 10 points on average. Second, we investigate the most effective base models for fine-tuning and assess the correlative and predictive capacities of task complexity heuristics in forecasting the outcomes of fine-tuning. Finally, we evaluate the latency and concurrency capabilities of LoRAX, an open-source Multi-LoRA inference server that facilitates the deployment of multiple LoRA fine-tuned models on a single GPU using shared base model weights and dynamic adapter loading. LoRAX powers LoRA Land, a web application that hosts 25 LoRA fine-tuned Mistral-7B LLMs on a single NVIDIA A100 GPU with 80GB memory. LoRA Land highlights the quality and cost-effectiveness of employing multiple specialized LLMs over a single, general-purpose LLM.
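
To make the fine-tuning setup in the abstract concrete, below is a minimal sketch of 4-bit (QLoRA-style) LoRA fine-tuning with the Hugging Face transformers, peft, and bitsandbytes libraries. It is not the paper's exact training recipe; the base model ID, target modules, rank, and other hyperparameters are illustrative assumptions.

```python
# Minimal QLoRA-style sketch: 4-bit quantized base model + low-rank adapters via peft.
# Illustrative only, not the paper's exact recipe; model ID, target modules, rank,
# and hyperparameters are placeholder choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

BASE_MODEL = "mistralai/Mistral-7B-v0.1"  # Mistral-7B is one of the base models used in the paper

# Load the base model in 4-bit NF4 precision so it fits comfortably on a single GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, quantization_config=bnb_config, device_map="auto"
)

# Attach LoRA adapters: only the small low-rank matrices are trained,
# while the quantized base weights stay frozen.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the total parameters
```

The resulting adapter is a small set of weights that can be saved and served independently of the frozen base model, which is what makes hosting many task-specific adapters on one GPU practical.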

Community

Very interesting research, even though it's an advertisement.

Very similar to what Anyscale is offering, with the difference being that Predibase trades slightly cheaper fine-tuning for slightly higher inference costs.

(Although the 8x7B model is an outlier here, as you are charging double what Anyscale charges.)

That said, both offerings are cheaper than local deployment once you factor in the cost of hardware and electricity. In fact, at these prices you're basically charging only for the electricity, at least from a UK perspective.

It's very nice to have the choice; it's cheap and looks developer friendly.

I'll probably use both and load-balance between the two.

Paper author:

Thanks for checking out our work and for the thoughtful comparison!

Yes, the AI tuning and deployment scene is pretty dynamic, and there are several excellent options available, each with its unique trade-offs in cost, performance, and ease of use. Our work with "LoRA Land" specifically explores serving fine-tuned models with LoRAX, which is completely open source!

Glad you find the options competitive and developer friendly. We'd definitely appreciate hearing more about your experience as you use these tools; every bit of user feedback helps us improve!
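
For a concrete picture of how LoRAX serves many adapters over one shared base model, here is a minimal client-side sketch. It assumes a LoRAX server running locally on port 8080 and uses its TGI-style /generate endpoint with an adapter_id parameter as described in the LoRAX documentation; the prompts and routing choices below are illustrative.

```python
# Minimal sketch: querying a LoRAX server that shares one base model across many
# LoRA adapters. Assumes LoRAX is running locally on port 8080 and that its
# TGI-style /generate endpoint accepts an "adapter_id" parameter (per the LoRAX
# docs); the prompts below are illustrative.
import requests

LORAX_URL = "http://127.0.0.1:8080/generate"

def generate(prompt: str, adapter_id: str | None = None, max_new_tokens: int = 64) -> str:
    """Send a prompt to LoRAX, optionally routing it through a specific LoRA adapter."""
    parameters = {"max_new_tokens": max_new_tokens}
    if adapter_id is not None:
        # LoRAX loads the requested adapter on demand and applies it at inference
        # time on top of the shared base model weights.
        parameters["adapter_id"] = adapter_id
    response = requests.post(
        LORAX_URL,
        json={"inputs": prompt, "parameters": parameters},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["generated_text"]

if __name__ == "__main__":
    # Same base model, with and without a specialized adapter.
    print(generate("Summarize: LoRA adds low-rank update matrices to frozen weights."))
    print(generate("A train travels 60 miles per hour for 3 hours. How far does it go?",
                   adapter_id="predibase/gsm8k"))
```

Because only the lightweight adapter weights differ between requests, many such fine-tuned "models" can be served concurrently from a single GPU.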


I have tried your fine-tuned LoRA adapter with Mistral 7B as the base, as shown in your repo:
https://huggingface.co/predibase/gsm8k

Here are the math results. They are poor; could you please help explain why?

==== Final Results ====
Subject: abstract_algebra, Accuracy: 0.370
Subject: college_mathematics, Accuracy: 0.340
Subject: elementary_mathematics, Accuracy: 0.376
Subject: high_school_mathematics, Accuracy: 0.289
Subject: high_school_statistics, Accuracy: 0.481
Average accuracy: 0.371
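
(For context, a typical way to attach such a LoRA adapter to its base model with peft looks roughly like the sketch below; the base model ID, dtype, and generation settings are illustrative assumptions and may not match the setup that produced the numbers above.)

```python
# Sketch: loading the predibase/gsm8k LoRA adapter onto a Mistral 7B base with peft.
# The base model ID and generation settings are illustrative assumptions and may
# differ from the evaluation setup used above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "mistralai/Mistral-7B-v0.1"  # assumed base; check the adapter card
ADAPTER_ID = "predibase/gsm8k"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, ADAPTER_ID)
model.eval()

prompt = "A train travels 60 miles per hour for 3 hours. How far does it travel?"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```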

Paper author:

Thanks for checking out the paper and the adapter! What datasets are you running this on and how are you extracting the score?


"Mikelabs" sound's awesume broo :D


Models citing this paper: 0

Datasets citing this paper: 0

Spaces citing this paper: 0

Collections including this paper: 30