beomi committed
Commit 0e81a8e
1 Parent(s): 7f041ae

Update README.md

Files changed (1):
  1. README.md +36 -7
README.md CHANGED
@@ -1,15 +1,44 @@
 ---
 license: apache-2.0
+tags:
+- generated_from_trainer
+- polyglot-ko
+- gpt-neox
+- KoAlpaca
+model-index:
+- name: KoAlpaca-Polyglot-5.8B
+  results: []
 language:
 - ko
+datasets:
+- KoAlpaca-v1.1b
 pipeline_tag: text-generation
-tags:
-- alpaca
-- llama
-- KoAlpaca
 ---

-# KoAlpaca: Korean Alpaca Model based on Stanford Alpaca (feat. LLAMA and Polyglot-ko)
-
-- More information at https://github.com/Beomi/KoAlpaca
-- This repository contains finetuned KoAlpaca model weights based on Polyglot-ko(5.8B)
+
+# KoAlpaca-Polyglot-5.8B (v1.1b)
+
+This model is a fine-tuned version of [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) on the KoAlpaca Dataset v1.1b.
+
+Detailed code is available at the [KoAlpaca GitHub Repository](https://github.com/Beomi/KoAlpaca).
+
+## Training procedure
+
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- learning_rate: 5e-05
+- train_batch_size: 2
+- eval_batch_size: 8
+- seed: 42
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: linear
+- num_epochs: 2.0
+- mixed_precision_training: Native AMP
+
+### Framework versions
+
+- Transformers 4.29.0.dev0
+- Pytorch 2.0.0+cu117
+- Datasets 2.10.1
+- Tokenizers 0.13.2
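
Since the updated card sets `pipeline_tag: text-generation`, the checkpoint can be exercised with the standard `transformers` pipeline. Below is a minimal inference sketch; the Hub id `beomi/KoAlpaca-Polyglot-5.8B` and the `### 질문:`/`### 답변:` prompt format are assumptions inferred from the model-index name and the KoAlpaca project, not confirmed by this diff.

```python
# Minimal inference sketch for the card described in this diff.
# The repo id below is an assumption based on the model-index name;
# substitute the actual Hub id if it differs.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",                     # matches pipeline_tag in the card
    model="beomi/KoAlpaca-Polyglot-5.8B",  # assumed Hub id
    torch_dtype=torch.float16,             # 5.8B params: half precision saves memory
    device_map="auto",                     # place weights on available GPU(s)
)

# Assumed KoAlpaca-style instruction prompt:
# "### Question: What is deep learning?\n\n### Answer:"
prompt = "### 질문: 딥러닝이 뭐야?\n\n### 답변:"
out = generator(prompt, max_new_tokens=128, do_sample=True)
print(out[0]["generated_text"])
```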
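The `generated_from_trainer` tag indicates the card was produced by the Hugging Face `Trainer`, so the hyperparameter list maps onto `TrainingArguments` roughly as sketched below. This is not the author's actual training script; model and dataset loading are omitted, and `output_dir` is hypothetical.

```python
# Rough mapping of the listed hyperparameters onto TrainingArguments,
# as implied by the generated_from_trainer tag. A sketch only.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="koalpaca-polyglot-5.8b",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2.0,
    fp16=True,  # "Native AMP" mixed precision
    # Adam betas (0.9, 0.999) and epsilon 1e-08 match the optimizer
    # defaults, so no adam_beta1/adam_beta2/adam_epsilon overrides needed.
)
```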