beomi committed
Commit 87e97e9
1 Parent(s): 1d24557

Update README.md

Files changed (1): README.md (+14 -20)
README.md CHANGED
@@ -1,30 +1,28 @@
  ---
- license: mit
  tags:
  - generated_from_trainer
  model-index:
  - name: KoRWKV-6B-koalpaca-v1.1a
    results: []
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
 
- # KoRWKV-6B-koalpaca-v1.1a
 
- This model is a fine-tuned version of [beomi/KoRWKV-6B](https://huggingface.co/beomi/KoRWKV-6B) on an unknown dataset.
 
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
 
  ## Training procedure
 
@@ -33,7 +31,6 @@ More information needed
  The following hyperparameters were used during training:
  - learning_rate: 2e-05
  - train_batch_size: 1
- - eval_batch_size: 8
  - seed: 42
  - gradient_accumulation_steps: 8
  - total_train_batch_size: 8
@@ -41,14 +38,11 @@ The following hyperparameters were used during training:
  - lr_scheduler_type: linear
  - num_epochs: 1.0
  - mixed_precision_training: Native AMP
-
- ### Training results
-
-
 
  ### Framework versions
 
  - Transformers 4.29.2
  - Pytorch 1.13.1
  - Datasets 2.12.0
- - Tokenizers 0.13.3

  ---
+ license: apache-2.0
  tags:
  - generated_from_trainer
+ - KoRWKV
+ - KoAlpaca
  model-index:
  - name: KoRWKV-6B-koalpaca-v1.1a
    results: []
+ datasets:
+ - beomi/KoAlpaca-v1.1a
+ language:
+ - ko
+ library_name: transformers
+ pipeline_tag: text-generation
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
 
+ # KoAlpaca-KoRWKV-6B (v1.1a)
 
+ This model is a fine-tuned version of [beomi/KoRWKV-6B](https://huggingface.co/beomi/KoRWKV-6B) on the [KoAlpaca v1.1a dataset](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a).
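The dataset is public on the Hub, so it can be inspected directly. Below is a minimal sketch using the `datasets` library (listed under Framework versions); the `train` split and the record layout are assumptions here, so treat whatever actually prints as authoritative.

```python
# Peek at the KoAlpaca v1.1a data this model was fine-tuned on.
from datasets import load_dataset

ds = load_dataset("beomi/KoAlpaca-v1.1a")
print(ds)              # available splits and their row counts
print(ds["train"][0])  # one instruction/answer record (assumed "train" split)
```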
 
+ Detailed code is available in the [KoAlpaca GitHub repository](https://github.com/Beomi/KoAlpaca).
 
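Since the front matter sets `library_name: transformers` and `pipeline_tag: text-generation`, inference should work through the standard `pipeline` API. A minimal sketch: the repo id `beomi/KoRWKV-6B-koalpaca-v1.1a` is inferred from the model-index name, and the KoAlpaca-style prompt is illustrative only (see the repository above for the exact template).

```python
# Text generation with the fine-tuned KoRWKV checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",                       # matches pipeline_tag above
    model="beomi/KoRWKV-6B-koalpaca-v1.1a",  # assumed repo id
)

# Illustrative prompt; "### 질문:" = question, "### 답변:" = answer.
# The question asks: "What is the capital of Korea?"
prompt = "### 질문: 한국의 수도는 어디인가요?\n\n### 답변:"
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```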
  ## Training procedure
 
  The following hyperparameters were used during training:
  - learning_rate: 2e-05
  - train_batch_size: 1
  - seed: 42
  - gradient_accumulation_steps: 8
  - total_train_batch_size: 8
  - lr_scheduler_type: linear
  - num_epochs: 1.0
  - mixed_precision_training: Native AMP
+ - Trained on 1x H100 (80G PCI-E) GPU
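For reference, here is how the listed values would map onto `transformers.TrainingArguments` in a standard `Trainer` setup. This is a reconstruction, not the author's actual script (that lives in the KoAlpaca repository); `output_dir` and `fp16` are assumptions.

```python
# The hyperparameters above, expressed as transformers TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="KoRWKV-6B-koalpaca-v1.1a",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,  # total train batch size: 1 x 8 = 8 on one GPU
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    seed=42,
    fp16=True,  # "Native AMP" mixed precision (assumed fp16 rather than bf16)
)
```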
 
  ### Framework versions
 
  - Transformers 4.29.2
  - Pytorch 1.13.1
  - Datasets 2.12.0
+ - Tokenizers 0.13.3