richardr1126 committed on
Commit cc16d61
1 Parent(s): c2f53a1

Update README.md

Files changed (1)
  README.md +9 -21
README.md CHANGED
@@ -1,9 +1,13 @@
  ---
  tags:
- - generated_from_trainer
+ - LoRA
+ - QLoRa
+ - LoRA Adapter
  model-index:
  - name: sql-guanaco-13b-4
    results: []
+ datasets:
+ - richardr1126/spider-sql_guanaco_style
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -11,21 +15,9 @@ should probably proofread and complete it, then remove this comment. -->
 
  # sql-guanaco-13b-4
 
- This model is a fine-tuned version of [richardr1126/guanaco-13b-merged](https://huggingface.co/richardr1126/guanaco-13b-merged) on an unknown dataset.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
+ This is a LoRA adapter for [richardr1126/guanaco-13b-merged](https://huggingface.co/richardr1126/guanaco-13b-merged), or any other merged guanaco-13b model, fine-tuned from LLaMA.
+ <br>
+ This LoRA was fine-tuned on [richardr1126/sql-create-context_guanaco_style](https://huggingface.co/datasets/richardr1126/sql-create-context_guanaco_style).
 
  ### Training hyperparameters
 
@@ -42,13 +34,9 @@ The following hyperparameters were used during training:
  - training_steps: 1875
  - mixed_precision_training: Native AMP
 
- ### Training results
-
-
-
  ### Framework versions
 
  - Transformers 4.30.0.dev0
  - Pytorch 2.0.1+cu118
  - Datasets 2.13.0
- - Tokenizers 0.13.3
+ - Tokenizers 0.13.3
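
The updated card describes a PEFT LoRA adapter rather than a standalone checkpoint. Below is a minimal loading sketch. The adapter repo id `richardr1126/sql-guanaco-13b-4` is inferred from the model-index name and is an assumption, as is the guanaco-style prompt format; only the base model id comes from the card.

```python
# Minimal sketch: load the base model named in the card, then apply the LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "richardr1126/guanaco-13b-merged"    # base model named in the card
adapter_id = "richardr1126/sql-guanaco-13b-4"  # assumed repo id (from the model-index name)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,  # a 13B model in fp16 needs roughly 26 GB of GPU memory
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

# Guanaco-style prompt (assumed from the dataset's "_guanaco_style" suffix).
prompt = "### Human: Write a SQL query that lists all users older than 30.### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If a standalone checkpoint is preferred, `model.merge_and_unload()` folds the adapter weights into the base model.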
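On the training side, only two hyperparameters survive in this diff (`training_steps: 1875` and Native AMP mixed precision). The sketch below shows how those two map onto a `transformers`/`peft` LoRA setup; every other value is an illustrative assumption, not taken from the card.

```python
# Hedged training-setup sketch. Only max_steps and fp16 come from the card;
# the LoRA rank, target modules, batch size, and learning rate are assumed.
import torch
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "richardr1126/guanaco-13b-merged",
    torch_dtype=torch.float16,
    device_map="auto",
)

lora_config = LoraConfig(                # assumed values, common for LLaMA-13B LoRA
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

args = TrainingArguments(
    output_dir="sql-guanaco-13b-4",
    max_steps=1875,                 # from the card: training_steps: 1875
    fp16=True,                      # from the card: mixed_precision_training: Native AMP
    per_device_train_batch_size=4,  # assumed
    learning_rate=2e-4,             # assumed
)
```

Loading the base model with a 4-bit `BitsAndBytesConfig` would turn this into the QLoRA recipe the tags suggest.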