KoBART-base-v2

Chat data was added to the training corpus, so the model handles the semantics of longer sequences better than the original KoBART.

from transformers import PreTrainedTokenizerFast, BartModel

# Load the KoBART tokenizer and encoder-decoder model from the Hugging Face Hub
tokenizer = PreTrainedTokenizerFast.from_pretrained('hyunwoongko/kobart')
model = BartModel.from_pretrained('hyunwoongko/kobart')
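
After loading, a minimal usage sketch might look like the following (the Korean input sentence is illustrative, not from the original card):

import torch

# Encode an illustrative sentence ("Hello") and run it through the model
inputs = tokenizer("안녕하세요.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)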

Performance

NSMC (Naver Sentiment Movie Corpus)

  • Accuracy: 0.901

hyunwoongko/kobart

  • Added a bos/eos post processor, so encoded inputs are wrapped in <s> ... </s>
  • Removed token_type_ids from the tokenizer output (a quick check is sketched below)
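
As a rough check (a sketch; the input sentence is illustrative), both changes are visible directly in the tokenizer output:

# The first and last tokens should be <s> and </s>, added by the post processor
enc = tokenizer("안녕하세요.")
print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
print("token_type_ids" in enc)  # False: token_type_ids is no longer returned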