---
license: mit
language:
  - ja
base_model: xlm-roberta-base
tags:
  - generated_from_trainer
  - massive
  - bert
datasets:
  - AmazonScience/massive
widget:
  - text: 明日の予定を教えて
metrics:
  - accuracy
  - f1
model-index:
  - name: xlm-roberta-base-finetuned-massive
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: massive
          type: massive
          config: ja-JP
          split: validation
          args: ja-JP
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8327594687653713
          - name: F1
            type: f1
            value: 0.8192120367052886
---

# xlm-roberta-base-finetuned-massive

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [AmazonScience/massive](https://huggingface.co/datasets/AmazonScience/massive) dataset (ja-JP configuration). It achieves the following results on the evaluation set:

- Loss: 0.7539
- Accuracy: 0.8328
- F1: 0.8192
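
The evaluation script is not included in this repository; the sketch below shows how the reported metrics might be reproduced on the ja-JP validation split named in the model-index. The batching, the label-id alignment with the dataset's `intent` ClassLabel, and the F1 averaging mode are all assumptions:

```python
import evaluate
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "thkkvui/xlm-roberta-base-finetuned-massive"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# ja-JP validation split, as declared in the model-index metadata
ds = load_dataset("AmazonScience/massive", "ja-JP", split="validation")

preds = []
with torch.no_grad():
    for batch in ds.iter(batch_size=32):
        enc = tokenizer(batch["utt"], padding=True, truncation=True, return_tensors="pt")
        preds.extend(model(**enc).logits.argmax(dim=-1).tolist())

# Assumes the classification head's label ids follow the dataset's
# intent ClassLabel order, as happens when fine-tuning on it directly.
accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")
print(accuracy.compute(predictions=preds, references=ds["intent"]))
print(f1.compute(predictions=preds, references=ds["intent"], average="weighted"))  # averaging mode is an assumption
```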

## Model description

More information needed

## Intended uses & limitations
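
A minimal usage example with the Transformers `text-classification` pipeline: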

```python
from transformers import pipeline

model_name = "thkkvui/xlm-roberta-base-finetuned-massive"
classifier = pipeline("text-classification", model=model_name)

# Japanese example utterances: "Tell me today's weather", "Any news?",
# "Check my schedule", "What's the dollar-yen rate?"
text = ["今日の天気を教えて", "ニュースある?", "予定をチェックして", "ドル円は?"]

for t in text:
    output = classifier(t)
    print(output)
```
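
For each input string, the pipeline returns a list with one dict containing the predicted intent `label` and its confidence `score`.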

## Training and evaluation data

More information needed
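
The model-index metadata points to the ja-JP configuration of [AmazonScience/massive](https://huggingface.co/datasets/AmazonScience/massive). Assuming that is the fine-tuning data, it can be loaded as follows:

```python
from datasets import load_dataset

# ja-JP locale of MASSIVE; the preprocessing used for fine-tuning
# is not documented, so this only shows the raw splits.
massive = load_dataset("AmazonScience/massive", "ja-JP")
print(massive)                     # train / validation / test splits
print(massive["train"][0]["utt"])  # a raw Japanese utterance
```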

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a matching `TrainingArguments` sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 3
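
If training used the standard `Trainer` API (an assumption; the training script is not provided), these settings would correspond roughly to the following `TrainingArguments`. The `output_dir` and the evaluation step interval are inferred from the results table below and are not confirmed:

```python
from transformers import TrainingArguments

# Hypothetical configuration matching the listed hyperparameters;
# treat this as a sketch, not the actual training script.
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-massive",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    num_train_epochs=3,
    evaluation_strategy="steps",  # the results table logs eval every 500 steps
    eval_steps=500,
    logging_steps=500,
)
```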

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 2.9836        | 0.69  | 500  | 1.6188          | 0.6257   | 0.5524 |
| 1.4569        | 1.39  | 1000 | 1.0347          | 0.7575   | 0.7251 |
| 1.0211        | 2.08  | 1500 | 0.8186          | 0.8205   | 0.8024 |
| 0.7799        | 2.78  | 2000 | 0.7539          | 0.8328   | 0.8192 |

### Framework versions

- Transformers 4.33.2
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3