adamo1139 committed
Commit
7f332a3
1 Parent(s): b0d2a40

Update procedure/tips_and_tricks_for_training_with_qlora_on_cheap_desktop_PC.md

procedure/tips_and_tricks_for_training_with_qlora_on_cheap_desktop_PC.md CHANGED
@@ -5,9 +5,9 @@ https://rentry.org/cpu-lora
 https://github.com/ggerganov/llama.cpp/pull/2632
 
 Making your own fine-tune requires a few things to be in place before you start:
-1. you need a model that you want to use as a base
-2. you need to get a dataset that will be used to communicate to the model the things you expect it to do
-3. you need to have access to hardware that you will use for training and a training method that will work on this hardware.
+- you need a model that you want to use as a base
+- you need to get a dataset that will be used to communicate to the model the things you expect it to do
+- you need to have access to hardware that you will use for training and a training method that will work on this hardware.
 
 1.
 For the model, I knew I wanted to go with the Llama 2 7B model, or something based on it, to get reasonable coherency at a small size.
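
As a rough illustration of how the three prerequisites in the list above fit together, here is a minimal QLoRA sketch using the Hugging Face transformers + peft + bitsandbytes stack. This is not code from the guide: the model name, the train.jsonl dataset file, and every hyperparameter below are placeholder assumptions.

```python
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 1. The base model -- assumed here to be Llama 2 7B, as chosen in the guide.
base_model = "meta-llama/Llama-2-7b-hf"

# Load the base weights in 4-bit NF4 so the model fits in consumer-GPU VRAM.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# Attach small trainable LoRA adapters on top of the frozen 4-bit weights.
# r, alpha, and target_modules are illustrative defaults, not the guide's values.
model = prepare_model_for_kbit_training(model)
model = get_peft_model(
    model,
    LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
               target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"),
)

# 2. The dataset -- a hypothetical JSONL file with one "text" field per example.
dataset = load_dataset("json", data_files="train.jsonl", split="train")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

# 3. The hardware constraint: batch size 1 plus gradient accumulation keeps
# memory use low enough for a single cheap desktop GPU.
trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="qlora-out",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=10,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("qlora-adapter")  # saves only the LoRA adapter weights
```

Saving just the adapter rather than a merged model keeps the output to tens of megabytes, which is what makes this workflow practical on a cheap desktop PC.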