---
tags:
- generated_from_trainer
datasets:
- jed351/shikoto_zh_hk
metrics:
- accuracy
model-index:
- name: gpt2-shikoto
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
    dataset:
      name: jed351/shikoto_zh_hk
      type: jed351/shikoto_zh_hk
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.37381769930940056
---
# gpt2-shikoto
This model was trained on a dataset I obtained from an online novel site. Please be aware that the stories might contain inappropriate content.

The base model, which was obtained by patching a GPT2 Chinese model and its tokenizer with Cantonese characters, can be found here.
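Below is a minimal usage sketch with the transformers pipeline API. The repository id is an assumption based on this card's name; replace it with the actual one.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Assumed repository id based on the card name; replace with the actual repo.
model_id = "jed351/gpt2-shikoto"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
# Sample a short continuation from a Cantonese prompt.
print(generator("今日", max_new_tokens=50, do_sample=True, top_k=50)[0]["generated_text"])
```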
## Training procedure

Please refer to the training script provided by Hugging Face.

The model was trained for 400,000 steps on 2 NVIDIA Quadro RTX 6000 GPUs for around 15 hours.
### Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 40
- total_eval_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400000
- mixed_precision_training: Native AMP
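
As a minimal sketch, the values above map onto transformers TrainingArguments roughly as follows, assuming a standard Trainer-based run; the output directory is an assumption and is not recorded in this card.

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above; output_dir is
# an assumption, and the Adam betas/epsilon shown are the library defaults.
training_args = TrainingArguments(
    output_dir="gpt2-shikoto",
    learning_rate=5e-5,
    per_device_train_batch_size=20,  # 2 GPUs -> total train batch size of 40
    per_device_eval_batch_size=20,   # 2 GPUs -> total eval batch size of 40
    seed=42,
    max_steps=400_000,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                       # native AMP mixed-precision training
)
```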
### Training results
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2