
This is a GPT-2 based model trained on the Korean Wikipedia dataset.

Since there was no Korean GPT-2 model pre-trained on a large dataset such as Wikipedia, I decided to train GPT-2 on Korean text.

It was trained on the Korean Wikipedia dataset (334,420 training articles, 83,605 validation articles).
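For reference, the article counts above correspond to an exact 80/20 train/validation split. A minimal sketch of that arithmetic (the counts are from this card; the split ratio is derived, not stated by the author):

```python
# Article counts as reported in the model card.
train_articles = 334420
val_articles = 83605

total = train_articles + val_articles
print(total)                    # total articles in the corpus
print(train_articles / total)   # fraction used for training
print(val_articles / total)     # fraction used for validation
```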

Yongwoo Jeong, Sep 13th, 2022.
