Chahnwoo committed on
Commit f5b210a
1 Parent(s): cd4ed2b

Update README.md

Files changed (1):
  1. README.md +6 -0
README.md CHANGED
@@ -37,9 +37,15 @@ MistralAI 7B model fine-tuned for 1 epoch on Databricks instruction tuning dataset
  - Quantized Low-Rank Adaptation (QLoRA)
  - Transformers Trainer
  - DataCollatorForSeq2Seq
+ - Distributed Data Parallel (DDP) across two GPUs
 
  #### Preprocessing
 
  Manually created tokenized 'labels' for the dataset.
+ Prompt template used a basic instruction-tuning template.
 
+ ### Hardware
 
+ Fine-tuning was performed on 2 × A100 GPUs
+ - Provided by Gnewsoft during the work period
+ Model and dataset are too large for free run sessions on Google Colab
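
The "manually created tokenized 'labels'" step in the diff can be sketched as follows: for instruction tuning, loss is usually computed only on response tokens, so prompt positions in `labels` are masked with -100 (the index PyTorch's cross-entropy loss ignores). This is a minimal stand-alone sketch with illustrative token ids; a real run would use the Mistral tokenizer's output.

```python
# Sketch of manually building `labels` for instruction tuning.
# Token ids below are illustrative, not real tokenizer output.

IGNORE_INDEX = -100  # ignored by PyTorch CrossEntropyLoss

def build_labels(prompt_ids, response_ids):
    """Concatenate prompt and response ids; mask the prompt in labels."""
    input_ids = list(prompt_ids) + list(response_ids)
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(response_ids)
    return {"input_ids": input_ids, "labels": labels}

example = build_labels([101, 7592, 2088], [2003, 2307, 102])
# Only the response tokens carry loss; the prompt positions are -100.
```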
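
The DataCollatorForSeq2Seq mentioned above pads variable-length `labels` in a batch with -100 so padded positions are also excluded from the loss. This is a simplified pure-Python stand-in for that behavior, not the actual transformers implementation:

```python
# Simplified sketch of DataCollatorForSeq2Seq's label padding:
# pad every label sequence in the batch to the longest one with -100.

LABEL_PAD = -100  # same ignore index used by the loss

def collate_labels(batch_labels):
    """Pad a batch of label lists to the longest sequence with -100."""
    max_len = max(len(labels) for labels in batch_labels)
    return [labels + [LABEL_PAD] * (max_len - len(labels))
            for labels in batch_labels]

padded = collate_labels([[5, 6], [7, 8, 9]])
# → [[5, 6, -100], [7, 8, 9]]
```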
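
The DDP bullet added in the diff is typically driven by PyTorch's `torchrun` launcher, which starts one process per GPU; the Transformers Trainer then picks up the distributed environment automatically. The script name here is a placeholder, not from the repo:

```shell
# Launch fine-tuning across both A100s with PyTorch DDP.
# `finetune.py` is a hypothetical entry-point name.
torchrun --nproc_per_node=2 finetune.py
```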