bzantium committed on
Commit
7c3101d
1 Parent(s): 8df93b8

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -97,7 +97,7 @@ python main.py \
 | [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) (ours) | 1.3B | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
 | [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (ours) | **3.8B** | **0.7595** | **0.7608** | **0.7638** | **0.7788** |

-<img src="https://user-images.githubusercontent.com/19511788/192074406-7a84034d-4dd4-40f8-be55-28d76c711c89.png" width="800px">
+<img src="https://user-images.githubusercontent.com/19511788/192087615-6df69c03-bd8e-4bf4-8539-0c3a336a1a85.png" width="800px">

 ### HellaSwag (F1)

@@ -108,7 +108,7 @@ python main.py \
 | [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) (ours) | 1.3B | 0.4013 | 0.3984 | 0.417 | 0.4416 |
 | [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (ours) | **3.8B** | **0.4438** | **0.4786** | **0.4737** | **0.4822** |

-<img src="https://user-images.githubusercontent.com/19511788/192074409-ee8b3b99-7c8f-45eb-a71d-b0347d1b4f14.png" width="800px">
+<img src="https://user-images.githubusercontent.com/19511788/192087611-3410babf-98a6-4472-88b6-f6cf3012bb74.png" width="800px">

 <p><strong>&dagger;</strong> The model card for this model reports evaluation results on the KOBEST dataset, but when we evaluated the model with the prompts described in the paper, we could not reproduce similar numbers. We checked the KOBEST paper and found that the model card's results are close to the fine-tuning results reported there. Because we evaluate by prompt-based generation without fine-tuning the model, our results may differ from those in the model card.</p>
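For context on the footnote above: "prompt-based generation" here means zero-shot scoring with the pretrained checkpoint rather than a fine-tuned classifier. The sketch below is illustrative only; it is not the `main.py` evaluation script referenced in the hunk headers, and the Korean premise and choices are made-up placeholders rather than the actual KOBEST COPA prompts. Assuming the `transformers` and `torch` packages, it picks the candidate continuation to which the model assigns the higher log-likelihood.

```python
# Illustrative zero-shot (prompt-based) scoring with a polyglot-ko checkpoint.
# NOT the repository's main.py; the prompt text below is a hypothetical placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/polyglot-ko-1.3b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def continuation_logprob(context: str, continuation: str) -> float:
    """Sum of log-probabilities the model assigns to `continuation` given `context`.
    Assumes tokenizing `context` alone yields a prefix of tokenizing the full string
    (a simplification that is usually fine for whitespace-separated prompts)."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits                     # [1, seq_len, vocab]
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)   # predictions for tokens 1..N-1
    targets = full_ids[:, 1:]
    token_logprobs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_logprobs[0, ctx_ids.shape[1] - 1 :].sum().item()  # continuation tokens only

# COPA-style choice: keep the candidate with the higher log-likelihood.
premise = "비가 와서"                                  # "Because it rained, ..."
choices = ["우산을 챙겼다.", "선글라스를 꼈다."]        # "took an umbrella." / "put on sunglasses."
scores = [continuation_logprob(premise + " ", c) for c in choices]
print("prediction:", choices[scores.index(max(scores))])
```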