Gustrd committed on
Commit 6f63f1e
1 Parent(s): 4ba8c43

Update README.md

Files changed (1)
  1. README.md +20 -1
README.md CHANGED
@@ -1,3 +1,22 @@
+ # Cabra: A Portuguese instruction-finetuned Open-LLaMA
+
+ LoRA adapter created with the procedures detailed in the GitHub repository: https://github.com/gustrd/cabra .
+
+ Training was run for 2 epochs on a single A4000 GPU at Paperspace.
+
+ The GGML version was created with llama.cpp's "convert-lora-to-ggml.py" script.
+
+ ---
+ library_name: peft
+ license: cc-by-sa-3.0
+ datasets:
+ - Gustrd/dolly-15k-libretranslate-pt
+ language:
+ - pt
+ ---
+
+ This LoRA adapter was created following the procedure described in the repository linked above.
+
  ---
  library_name: peft
  ---
@@ -17,4 +36,4 @@ The following `bitsandbytes` quantization config was used during training:
  ### Framework versions


- - PEFT 0.5.0.dev0
+ - PEFT 0.5.0.dev0
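
The card sets `library_name: peft`, so the adapter is meant to be applied on top of an Open-LLaMA base model with the `peft` library. Below is a minimal usage sketch; the adapter repo id and the base model id are placeholders for illustration and are not taken from this commit.

```python
# Minimal sketch: applying the Cabra LoRA adapter to an Open-LLaMA base model with peft.
# Both repo ids below are placeholders for illustration, not confirmed by this commit.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "openlm-research/open_llama_3b"  # assumed base checkpoint
adapter_id = "Gustrd/cabra"                # assumed adapter repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# PeftModel wraps the base model and loads the LoRA weights on top of it.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Explique o que é aprendizado de máquina."  # Portuguese instruction prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```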
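The README notes that the GGML version was produced with llama.cpp's `convert-lora-to-ggml.py`. A rough sketch of that step from Python follows; the script's location and its single positional argument (the adapter directory) reflect older llama.cpp checkouts and should be treated as assumptions.

```python
# Sketch: running llama.cpp's convert-lora-to-ggml.py against the PEFT adapter folder.
# The script path and its single positional argument are assumptions based on
# older llama.cpp checkouts; check the script in your own revision.
import subprocess
from pathlib import Path

llama_cpp = Path("llama.cpp")      # assumed local clone of llama.cpp
adapter_dir = Path("cabra-lora")   # assumed folder with adapter_config.json / adapter_model.bin

subprocess.run(
    ["python", str(llama_cpp / "convert-lora-to-ggml.py"), str(adapter_dir)],
    check=True,
)
# On the assumed script version, the GGML adapter file is written next to the
# original adapter files.
```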
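The new front matter names the training dataset `Gustrd/dolly-15k-libretranslate-pt`. A quick sketch of pulling it with the `datasets` library, assuming a standard `train` split:

```python
# Sketch: loading the Portuguese Dolly dataset named in the card's front matter.
from datasets import load_dataset

# split="train" is an assumption; adjust to whatever splits the dataset exposes.
ds = load_dataset("Gustrd/dolly-15k-libretranslate-pt", split="train")
print(ds)     # number of rows and column names
print(ds[0])  # one translated instruction/response record
```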