
factual-consistency-classification-ja

This model is a fine-tuned version of line-corporation/line-distilbert-base-japanese. The training dataset is not specified in this card. It achieves the following results on the evaluation set:

  • Loss: 0.6901
  • Accuracy: 0.6855

Model description

More information needed

Intended uses & limitations

More information needed
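
Although usage is not documented here, the sketch below shows one plausible way to run the checkpoint, assuming it is a standard Transformers sequence-classification model that scores the factual consistency of a Japanese text pair (for example a source passage and a claim). The example sentences, the pair formulation, and the label interpretation are assumptions, not documented behavior; the base LINE DistilBERT tokenizer may additionally require sentencepiece, fugashi, and unidic-lite.

```python
# Minimal inference sketch (assumed usage, not documented in this card).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "liwii/factual-consistency-classification-ja"

# The LINE DistilBERT tokenizer ships custom code, so trust_remote_code is
# likely needed; it may also require sentencepiece/fugashi/unidic-lite.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

source = "昨日、東京で花火大会が開催された。"  # hypothetical source text
claim = "花火大会は大阪で開催された。"          # hypothetical claim to check

# Assumed (source, claim) pair input; the actual expected format may differ.
inputs = tokenizer(source, claim, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)

# The label mapping is not documented; inspect model.config.id2label.
print(model.config.id2label)
print(probs)
```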

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after the list):

  • learning_rate: 1e-05
  • train_batch_size: 64
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: tpu
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 100
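
As a rough guide, these values correspond to a Hugging Face TrainingArguments configuration along the lines of the sketch below; the output directory, the evaluation strategy, and the per-device interpretation of the batch size are assumptions rather than details taken from the original training script.

```python
# Hedged reconstruction of the listed hyperparameters; values marked "assumed"
# are not stated in the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="factual-consistency-classification-ja",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=64,   # the reported 64 may be the total across TPU cores
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=100,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",      # assumed from the per-epoch results table
)
# "distributed_type: tpu" indicates the run was launched on a TPU,
# e.g. with `accelerate launch` or the Trainer's built-in XLA support.
```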

Training results

Training Loss | Epoch | Step | Validation Loss | Accuracy
No log 1.0 306 1.0484 0.2773
1.044 2.0 612 1.0016 0.3320
1.044 3.0 918 0.9675 0.3633
0.9847 4.0 1224 0.9380 0.4434
0.9411 5.0 1530 0.9163 0.4863
0.9411 6.0 1836 0.8943 0.5742
0.9091 7.0 2142 0.8852 0.5137
0.9091 8.0 2448 0.8750 0.5039
0.8895 9.0 2754 0.8674 0.4941
0.8736 10.0 3060 0.8529 0.5254
0.8736 11.0 3366 0.8477 0.5195
0.8569 12.0 3672 0.8290 0.6133
0.8569 13.0 3978 0.8231 0.6055
0.8512 14.0 4284 0.8181 0.5879
0.8414 15.0 4590 0.8084 0.6211
0.8414 16.0 4896 0.8050 0.6152
0.8323 17.0 5202 0.8015 0.6016
0.8276 18.0 5508 0.7893 0.6426
0.8276 19.0 5814 0.7889 0.6250
0.8248 20.0 6120 0.7829 0.6289
0.8248 21.0 6426 0.7796 0.6211
0.8166 22.0 6732 0.7759 0.6211
0.8107 23.0 7038 0.7720 0.6230
0.8107 24.0 7344 0.7691 0.6250
0.8094 25.0 7650 0.7620 0.6426
0.8094 26.0 7956 0.7646 0.6230
0.8055 27.0 8262 0.7533 0.6582
0.8006 28.0 8568 0.7546 0.6348
0.8006 29.0 8874 0.7525 0.6348
0.7987 30.0 9180 0.7493 0.6465
0.7987 31.0 9486 0.7469 0.6484
0.7945 32.0 9792 0.7417 0.6562
0.7949 33.0 10098 0.7412 0.6465
0.7949 34.0 10404 0.7440 0.6367
0.7883 35.0 10710 0.7392 0.6504
0.7874 36.0 11016 0.7316 0.6660
0.7874 37.0 11322 0.7319 0.6543
0.7855 38.0 11628 0.7339 0.6504
0.7855 39.0 11934 0.7299 0.6562
0.7856 40.0 12240 0.7299 0.6504
0.7816 41.0 12546 0.7227 0.6738
0.7816 42.0 12852 0.7275 0.6504
0.7805 43.0 13158 0.7269 0.6543
0.7805 44.0 13464 0.7206 0.6641
0.7756 45.0 13770 0.7175 0.6777
0.7779 46.0 14076 0.7172 0.6660
0.7779 47.0 14382 0.7191 0.6582
0.7778 48.0 14688 0.7145 0.6680
0.7778 49.0 14994 0.7154 0.6602
0.7701 50.0 15300 0.7121 0.6719
0.774 51.0 15606 0.7142 0.6641
0.774 52.0 15912 0.7132 0.6719
0.7732 53.0 16218 0.7078 0.6836
0.768 54.0 16524 0.7123 0.6641
0.768 55.0 16830 0.7048 0.6855
0.7681 56.0 17136 0.7091 0.6641
0.7681 57.0 17442 0.7055 0.6797
0.7685 58.0 17748 0.7047 0.6816
0.7684 59.0 18054 0.7036 0.6836
0.7684 60.0 18360 0.7025 0.6836
0.7633 61.0 18666 0.7042 0.6699
0.7633 62.0 18972 0.7040 0.6699
0.7659 63.0 19278 0.7017 0.6777
0.7647 64.0 19584 0.7003 0.6836
0.7647 65.0 19890 0.7015 0.6738
0.7676 66.0 20196 0.6987 0.6816
0.7607 67.0 20502 0.6972 0.6875
0.7607 68.0 20808 0.6988 0.6777
0.7637 69.0 21114 0.6968 0.6875
0.7637 70.0 21420 0.6968 0.6816
0.7556 71.0 21726 0.6980 0.6738
0.7608 72.0 22032 0.6983 0.6758
0.7608 73.0 22338 0.6967 0.6758
0.7532 74.0 22644 0.6950 0.6816
0.7532 75.0 22950 0.6961 0.6738
0.7592 76.0 23256 0.6949 0.6797
0.7553 77.0 23562 0.6936 0.6836
0.7553 78.0 23868 0.6939 0.6855
0.7581 79.0 24174 0.6937 0.6816
0.7581 80.0 24480 0.6922 0.6836
0.7558 81.0 24786 0.6934 0.6758
0.7581 82.0 25092 0.6922 0.6855
0.7581 83.0 25398 0.6939 0.6738
0.7561 84.0 25704 0.6931 0.6797
0.7581 85.0 26010 0.6914 0.6836
0.7581 86.0 26316 0.6923 0.6797
0.7553 87.0 26622 0.6921 0.6816
0.7553 88.0 26928 0.6923 0.6797
0.7553 89.0 27234 0.6913 0.6816
0.7551 90.0 27540 0.6911 0.6816
0.7551 91.0 27846 0.6920 0.6777
0.7515 92.0 28152 0.6907 0.6797
0.7515 93.0 28458 0.6914 0.6797
0.7574 94.0 28764 0.6912 0.6797
0.7525 95.0 29070 0.6906 0.6836
0.7525 96.0 29376 0.6905 0.6836
0.7539 97.0 29682 0.6899 0.6855
0.7539 98.0 29988 0.6899 0.6855
0.754 99.0 30294 0.6901 0.6855
0.7573 100.0 30600 0.6901 0.6855

Framework versions

  • Transformers 4.34.0
  • Pytorch 2.0.0+cu118
  • Datasets 2.14.5
  • Tokenizers 0.14.0
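
A quick way to check a local environment against these versions (a convenience sketch; matching the exact patch versions is probably not strictly required):

```python
# Print installed versions next to the ones listed in this card.
import transformers, torch, datasets, tokenizers

expected = {
    "transformers": "4.34.0",
    "torch": "2.0.0+cu118",
    "datasets": "2.14.5",
    "tokenizers": "0.14.0",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name in expected:
    print(f"{name}: installed {installed[name]}, card lists {expected[name]}")
```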