---
base_model:
  - nlpai-lab/KoE5
datasets:
  - devngho/ko_llm_annotations
language:
  - ko
library_name: transformers
license: mit
metrics:
  - f1
---

devngho/ko_edu_classifier_v2_nlpai-lab_KoE5

์ด ๋ชจ๋ธ์˜ ๊ธฐ๋ฐ˜ ๋ชจ๋ธ์€ query: , passage: ์„ ๋ถ™์ด๋„๋ก ํ•™์Šต๋˜์—ˆ์œผ๋ฉฐ, ์ด ๋ชจ๋ธ๋„ passage: ์„ ๋ถ™์ด๋„๋ก ํ•™์Šต๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ž…๋ ฅ ํ…์ŠคํŠธ ์•ž์— ๊ผญ passage: ์„ ์ถ”๊ฐ€ํ•˜์„ธ์š”.

์ด ๋ชจ๋ธ์€ nlpai-lab/KoE5์— classifier๋ฅผ ์ถ”๊ฐ€ํ•œ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. HuggingFaceFW/fineweb-edu-classifier์˜ ํ•œ๊ตญ์–ด ๋ฒ„์ „์„ ๋ชฉํ‘œ๋กœ, ํ•œ๊ตญ์–ด ์›น ํŽ˜์ด์ง€์˜ ๊ต์œก์„ฑ ์ ์ˆ˜๋ฅผ ํ‰๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ํ•™์Šต์—๋Š” blueapple8259/c4-ko-cleaned-2์—์„œ ์ถ”์ถœํ•œ 500k ์ƒ˜ํ”Œ์„ Qwen/Qwen2.5-32B-Instruct๋กœ ํ‰๊ฐ€ํ•œ devngho/ko_llm_annotations ๋ฐ์ดํ„ฐ์…‹์ด ์‚ฌ์šฉ๋˜์—ˆ์Šต๋‹ˆ๋‹ค.

This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC). โšก

์ƒ์„ธ

  • Developed by: devngho
  • Language(s): ko
  • License: mit
  • Base model: nlpai-lab/KoE5

ํ•™์Šต ์ƒ์„ธ

  • learning_rate: 3e-4 (cosine)
  • warmup_ratio: 0.1
  • batch_size: 2048(512*4)
  • optimizer: adamw(b1=0.9, b2=0.98, eps=1e-8, weight_decay=0.01); see the sketch after this list
  • duration: 8h 12m
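
The optimizer and schedule settings listed above could be expressed roughly as follows. This is an illustration with PyTorch and transformers, not the actual training code (training ran on a TPU v4-8 and may have used a different stack); total_steps is a placeholder.

```python
# Illustration of the hyperparameters above with PyTorch + transformers.
# Not the actual training code: training ran on a TPU v4-8 and may have
# used a different stack. `total_steps` is a placeholder.
import torch
from transformers import get_cosine_schedule_with_warmup

def build_optimizer(model, total_steps: int):
    optimizer = torch.optim.AdamW(
        model.parameters(),
        lr=3e-4,            # learning_rate
        betas=(0.9, 0.98),  # b1, b2
        eps=1e-8,
        weight_decay=0.01,
    )
    # Cosine decay with 10% linear warmup (warmup_ratio = 0.1).
    scheduler = get_cosine_schedule_with_warmup(
        optimizer,
        num_warmup_steps=int(0.1 * total_steps),
        num_training_steps=total_steps,
    )
    return optimizer, scheduler
```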

Training hardware

TPU v4-8

Performance

Validation Report:
              precision    recall  f1-score   support

           0       0.66      0.33      0.44       198
           1       0.75      0.63      0.68      1553
           2       0.46      0.68      0.55      1159
           3       0.63      0.56      0.59       967
           4       0.62      0.26      0.36       219

    accuracy                           0.59      4096
   macro avg       0.62      0.49      0.52      4096
weighted avg       0.62      0.59      0.59      4096

Confusion Matrix:
[[ 66 116  16   0   0]
 [ 34 977 520  22   0]
 [  0 207 791 159   2]
 [  0  11 382 541  33]
 [  0   0  20 143  56]]

๋‹ค๋ฅธ ์ž‘์€ ๋ชจ๋ธ๋“ค๋ณด๋‹ค๋Š” ๋†’์€ ์„ฑ๋Šฅ์„ ๋ณด์ด์ง€๋งŒ, ํ•œ๊ตญ์–ด ์ž„๋ฒ ๋”ฉ์˜ ํ•œ๊ณ„์™€ qwen2.5 32b ๋ชจ๋ธ์˜ ํ‰๊ฐ€ ํ•œ๊ณ„๋กœ ์„ฑ๋Šฅ์ด ๋‚ฎ์€ ๊ฒƒ์œผ๋กœ ๋ณด์ž…๋‹ˆ๋‹ค. 3 ์ด์ƒ๊ณผ ๋ฏธ๋งŒ์œผ๋กœ ๊ตฌ๋ถ„ํ•  ๋•Œ f1 score๋Š” ์•ฝ 0.72์ž…๋‹ˆ๋‹ค.

devngho/ko_edu_classifier_v2_nlpai-lab_KoE5

The base model was trained with the "query: " and "passage: " prefixes, and this model has also been trained with the "passage: " prefix. Be sure to prepend "passage: " to your input text.

This model is nlpai-lab/KoE5 with a classification head. It is designed to evaluate the educational value of Korean web pages, similar to HuggingFaceFW/fineweb-edu-classifier, but focused on Korean content. The training data comes from the devngho/ko_llm_annotations dataset, which contains 500k samples extracted from blueapple8259/c4-ko-cleaned-2 and annotated with Qwen/Qwen2.5-32B-Instruct.

This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC). โšก

  • Developed by: devngho
  • Language(s): ko
  • License: mit
  • Base model: nlpai-lab/KoE5

Training details

  • learning_rate: 3e-4 (cosine)
  • warmup_ratio: 0.1
  • batch_size: 2048(512*4)
  • optimizer: adamw(b1=0.9, b2=0.98, eps=1e-8, weight_decay=0.01)
  • duration: 3h 21m

Training hardware

TPU v4-8

Performance

Validation Report:
              precision    recall  f1-score   support

           0       0.66      0.33      0.44       198
           1       0.75      0.63      0.68      1553
           2       0.46      0.68      0.55      1159
           3       0.63      0.56      0.59       967
           4       0.62      0.26      0.36       219

    accuracy                           0.59      4096
   macro avg       0.62      0.49      0.52      4096
weighted avg       0.62      0.59      0.59      4096

Confusion Matrix:
[[ 66 116  16   0   0]
 [ 34 977 520  22   0]
 [  0 207 791 159   2]
 [  0  11 382 541  33]
 [  0   0  20 143  56]]

The relatively low performance is likely due to the limitations of Korean embeddings and of the Qwen2.5-32B-Instruct annotations used for labeling. When the labels are binarized at 3 (scores of 3 and above versus below 3), the F1 score is about 0.72.
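
If you follow fineweb-edu-classifier's convention of treating a score of 3 or higher as educational, filtering a corpus might look like the hedged sketch below. The helper function and batch handling are illustrative, not part of this repository, and the snippet again handles either a regression-style or a 5-way classification head.

```python
# Hedged sketch: batch-score documents and keep those rated 3 or higher,
# following fineweb-edu-classifier's convention for "educational" content.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "devngho/ko_edu_classifier_v2_nlpai-lab_KoE5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

def score_batch(texts):
    # The "passage: " prefix is required, as noted above.
    batch = tokenizer(
        ["passage: " + t for t in texts],
        return_tensors="pt", padding=True, truncation=True,
    )
    with torch.no_grad():
        logits = model(**batch).logits
    if logits.shape[-1] == 1:  # regression-style head
        return [max(0, min(4, round(x))) for x in logits.squeeze(-1).tolist()]
    return logits.argmax(-1).tolist()  # 5-way classification head

docs = ["์ฒซ ๋ฒˆ์งธ ๋ฌธ์„œ ๋ณธ๋ฌธ", "๋‘ ๋ฒˆ์งธ ๋ฌธ์„œ ๋ณธ๋ฌธ"]
educational = [d for d, s in zip(docs, score_batch(docs)) if s >= 3]
```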