wangchanberta-base-wiki-20210520-spm-finetune-qa

Finetuning airesearchth/wangchanberta-base-wiki-20210520-spmd with the training sets of iapp_wiki_qa_squad, thaiqa_squad, and nsc_qa. Training examples with cosine similarity above 0.8 to any validation or test example were removed, and the contexts of the latter two datasets were trimmed to around 300 newmm words. Benchmarks are shared on wandb using the validation and test sets of iapp_wiki_qa_squad. Trained with thai2transformers.
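The deduplication and trimming described above can be sketched as follows. The 0.8 threshold and the ~300-word limit come from the description; the function names, the bag-of-words similarity, and the whitespace tokenizer (standing in for pythainlp's newmm word tokenizer) are illustrative assumptions, not the actual preprocessing code.

```python
import math
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    """Cosine similarity between two texts over bag-of-words counts."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def filter_train_examples(train_texts, held_out_texts, threshold=0.8):
    """Drop training examples whose similarity to any validation/test
    example exceeds the threshold (0.8 per the model card)."""
    return [t for t in train_texts
            if all(cosine_sim(t, h) <= threshold for h in held_out_texts)]

def trim_context(context: str, max_words: int = 300) -> str:
    """Trim a context to roughly max_words tokens. The original used the
    newmm tokenizer; a whitespace split is used here as a stand-in."""
    return " ".join(context.split()[:max_words])
```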

Run with:

```
export MODEL_NAME=airesearchth/wangchanberta-base-wiki-20210520-news-spm
CUDA_LAUNCH_BLOCKING=1 python train_question_answering_lm_finetuning.py \
  --model_name $MODEL_NAME \
  --dataset_name chimera_qa \
  --output_dir $MODEL_NAME-finetune-chimera_qa-model \
  --log_dir $MODEL_NAME-finetune-chimera_qa-log \
  --model_max_length 400 \
  --pad_on_right \
  --fp16
```