lxe committed
Commit 8dcad5e
1 Parent(s): 2337c3d

Update README.md

Files changed (1):
  1. README.md (+4 -3)
README.md CHANGED
@@ -2,7 +2,9 @@
 
  [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/lxe/simple-llama-finetuner/blob/master/Simple_LLaMA_FineTuner.ipynb)
 
- Simple LLaMA FineTuner is a user-friendly interface designed to facilitate fine-tuning the LLaMA-7B language model using the peft/LoRA method. With this intuitive UI, you can easily manage your dataset, customize parameters, train, and evaluate the model's inference capabilities.
+ Simple LLaMA FineTuner is a beginner-friendly interface designed to facilitate fine-tuning the LLaMA-7B language model using the peft/LoRA method on commodity NVIDIA GPUs. With a small dataset and sample lengths of 256, you can even run this on a regular Colab Tesla T4 instance.
+
+ With this intuitive UI, you can easily manage your dataset, customize parameters, train, and evaluate the model's inference capabilities.
 
  ## Acknowledgements
 
@@ -13,7 +15,6 @@ Simple LLaMA FineTuner is a user-friendly interface designed to facilitate fine-
 
  ## Features
 
- - Fine-tuning LLaMA-7B on NVIDIA RTX 3090 (or better)
  - Simply paste datasets in the UI, separated by double blank lines
  - Adjustable parameters for fine-tuning and inference
  - Beginner-friendly UI with explanations for each parameter
@@ -23,7 +24,7 @@ Simple LLaMA FineTuner is a user-friendly interface designed to facilitate fine-
  ### Prerequisites
 
  - Linux or WSL
- - Modern NVIDIA GPU with >24 GB of VRAM (but it might be possible to run with less for smaller sample lengths)
+ - Modern NVIDIA GPU with >16 GB of VRAM (but it might be possible to run with less for smaller sample lengths)
 
  ### Usage
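
For context on the two technical points in the updated README text, the peft/LoRA fine-tuning method and the "separated by double blank lines" dataset format, here is a minimal Python sketch. The checkpoint name, LoRA hyperparameters, and exact splitting convention are illustrative assumptions, not code taken from this repository.

```python
# Illustrative sketch only: the checkpoint name, LoRA hyperparameters, and the
# dataset-splitting convention below are assumptions, not this repository's code.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# "Datasets separated by double blank lines": assuming this means two consecutive
# empty lines between samples, pasted text can be split like this.
raw_text = "First training sample.\nIt may span several lines.\n\n\nSecond training sample."
samples = [chunk.strip() for chunk in raw_text.split("\n\n\n") if chunk.strip()]
print(len(samples))  # 2

# Load a LLaMA-7B checkpoint (placeholder hub ID) in 8-bit to keep VRAM low;
# 8-bit loading requires the bitsandbytes package.
model = AutoModelForCausalLM.from_pretrained(
    "your-org/llama-7b-hf",  # placeholder checkpoint name
    load_in_8bit=True,
    device_map="auto",
)

# Attach LoRA adapters; only these low-rank matrices receive gradients, so the
# trainable parameter count (and optimizer state) stays small.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# From here, tokenize `samples` and train with transformers.Trainer or a manual loop.
```

Because only the adapter weights are trained, memory use is dominated by the frozen 8-bit base model, which is what makes the >16 GB and Colab Tesla T4 figures in the README plausible.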