---
language:
  - ko
  - en
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - KETI-AIR/aihub_koenzh_food_translation
  - KETI-AIR/aihub_scitech_translation
  - KETI-AIR/aihub_scitech20_translation
  - KETI-AIR/aihub_socialtech20_translation
  - KETI-AIR/aihub_spoken_language_translation
metrics:
  - bleu
pipeline_tag: translation
widget:
  - text: >-
      translate_ko2en: IBM 왓슨X는 AI 및 데이터 플랫폼이다. 신뢰할 수 있는 데이터, 속도, 거버넌스를 갖고 파운데이션
      모델 및 머신 러닝 기능을 포함한 AI 모델을 학습시키고, 조정해, 조직 전체에서 활용하기 위한 전 과정을 아우르는 기술과 서비스를
      제공한다.
    example_title: Sample 1
  - text: >-
      translate_ko2en: 이용자는 신뢰할 수 있고 개방된 환경에서 자신의 데이터에 대해 자체적인 AI를 구축하거나, 시장에
      출시된 AI 모델을 정교하게 조정할 수 있다. 대규모로 활용하기 위한 도구 세트, 기술, 인프라 및 전문 컨설팅 서비스를 활용할 수
      있다.
    example_title: Sample 2
base_model: KETI-AIR/long-ke-t5-base
model-index:
  - name: ko2en
    results:
      - task:
          type: translation
          name: Translation
        dataset:
          name: >-
            KETI-AIR/aihub_koenzh_food_translation,KETI-AIR/aihub_scitech_translation,KETI-AIR/aihub_scitech20_translation,KETI-AIR/aihub_socialtech20_translation,KETI-AIR/aihub_spoken_language_translation
            koen,none,none,none,none
          type: >-
            KETI-AIR/aihub_koenzh_food_translation,KETI-AIR/aihub_scitech_translation,KETI-AIR/aihub_scitech20_translation,KETI-AIR/aihub_socialtech20_translation,KETI-AIR/aihub_spoken_language_translation
          args: koen,none,none,none,none
        metrics:
          - type: bleu
            value: 58.7008
            name: Bleu
---

# ko2en

This model is a fine-tuned version of [KETI-AIR/long-ke-t5-base](https://huggingface.co/KETI-AIR/long-ke-t5-base) on a combination of five AI Hub translation datasets: KETI-AIR/aihub_koenzh_food_translation (koen configuration), KETI-AIR/aihub_scitech_translation, KETI-AIR/aihub_scitech20_translation, KETI-AIR/aihub_socialtech20_translation, and KETI-AIR/aihub_spoken_language_translation. It achieves the following results on the evaluation set:

- Loss: 0.5186
- Bleu: 58.7008
- Gen Len: 27.0073
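
Since the card does not yet include a usage example, here is a minimal inference sketch. It assumes the checkpoint loads with the standard Transformers seq2seq classes (as expected for a long-ke-t5 derivative) and that inputs carry the `translate_ko2en:` prefix shown in the widget examples; the repo id and generation settings below are placeholders, not the authors' choices.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder: substitute the actual Hub repo id of this model.
model_id = "<this-model-repo-id>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Inputs use the task prefix shown in the widget examples above.
text = "translate_ko2en: IBM 왓슨X는 AI 및 데이터 플랫폼이다."
inputs = tokenizer(text, return_tensors="pt")

# Beam size and max_length are illustrative choices, not the authors' settings.
outputs = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```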

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
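
Until that information is added, the metadata above names five AI Hub datasets. Below is a hedged sketch of loading and concatenating them with the `datasets` library, assuming you have access to these Hub datasets and that the `koen,none,none,none,none` args mean only the first dataset uses a named configuration.

```python
from datasets import concatenate_datasets, load_dataset

# Assumption: only the first dataset takes a named configuration ("koen"),
# matching the "koen,none,none,none,none" args in the metadata.
sources = [
    ("KETI-AIR/aihub_koenzh_food_translation", "koen"),
    ("KETI-AIR/aihub_scitech_translation", None),
    ("KETI-AIR/aihub_scitech20_translation", None),
    ("KETI-AIR/aihub_socialtech20_translation", None),
    ("KETI-AIR/aihub_spoken_language_translation", None),
]

# concatenate_datasets assumes all splits share the same column schema.
train_data = concatenate_datasets(
    [load_dataset(path, name, split="train") for path, name in sources]
)
```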

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged mapping onto Seq2SeqTrainingArguments is sketched after the list):

- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
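
As a rough reproduction aid, here is a minimal sketch of the equivalent Seq2SeqTrainingArguments, assuming a standard Seq2SeqTrainer setup; the per-device batch size of 16 across 8 GPUs yields the reported totals of 128. This is a reconstruction, not the authors' training script.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction from the reported values; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="ko2en",
    learning_rate=1e-3,
    per_device_train_batch_size=16,  # x 8 GPUs = total 128
    per_device_eval_batch_size=16,   # x 8 GPUs = total 128
    seed=42,
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    predict_with_generate=True,      # required to report BLEU and Gen Len
)
```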

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 0.6234        | 1.0   | 93762  | 0.5843          | 33.9843 | 17.5378 |
| 0.5334        | 2.0   | 187524 | 0.5369          | 35.3271 | 17.5388 |
| 0.4704        | 3.0   | 281286 | 0.5186          | 36.0533 | 17.5335 |
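
BLEU scores like these are commonly computed with sacrebleu (the bleu metric named in the metadata); the card does not include the evaluation script, so the following sketch with hypothetical strings is purely illustrative.

```python
import sacrebleu

# Hypothetical model outputs and references, for illustration only.
hypotheses = ["IBM watsonx is an AI and data platform."]
references = [["IBM watsonx is an AI and data platform."]]  # one reference stream

# corpus_bleu takes the hypotheses plus a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(round(bleu.score, 4))
```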

### Framework versions

- Transformers 4.25.1
- PyTorch 1.12.0
- Datasets 2.8.0
- Tokenizers 0.13.2