Gustrd committed
Commit 2b2fc73
1 Parent(s): 52c20cd

Update README.md

Files changed (1):
  1. README.md +16 -2
README.md CHANGED
@@ -1,8 +1,21 @@
---
library_name: peft
+ license: cc-by-3.0
+ datasets:
+ - Gustrd/dolly-15k-hippo-translated-pt-12k
+ language:
+ - pt
---
- ## Training procedure

+ ### Cabra: A Portuguese fine-tuned instruction Open-LLaMA
+
+ LoRA adapter created with the procedure detailed in the GitHub repository: https://github.com/gustrd/cabra.
+
+ Training was done for 2 epochs using two T4 GPUs on Kaggle.
+
+ This LoRA adapter was created following the procedure below:
+
+ ## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
@@ -15,7 +28,8 @@ The following `bitsandbytes` quantization config was used during training:
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
+
### Framework versions


- - PEFT 0.5.0.dev0
+ - PEFT 0.5.0.dev0
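
For context, the quantization settings listed above map directly onto `transformers`' `BitsAndBytesConfig`. A minimal sketch, assuming 4-bit loading (the `load_in_*` lines fall in the elided diff context) and a hypothetical Open-LLaMA checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit settings copied from the config listed above; load_in_4bit=True is an
# assumption, since the load_in_* lines fall in the elided part of the diff.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

# Hypothetical base checkpoint; the card does not pin an exact Open-LLaMA ID.
model = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_3b",
    quantization_config=bnb_config,
    device_map="auto",
)
```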
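Similarly, a minimal sketch of attaching the resulting LoRA adapter with `peft`; both repository IDs below are illustrative placeholders rather than IDs pinned by this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder IDs: neither the base checkpoint nor the adapter repo is
# confirmed by this card; substitute the actual ones.
base_id = "openlm-research/open_llama_3b"
adapter_id = "Gustrd/cabra-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# Attach the trained LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

# Portuguese instruction, matching the adapter's target language.
prompt = "Explique o que é um adaptador LoRA."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```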