KoichiYasuoka committed
Commit 36b5b39
1 Parent(s): bb06bb5

model improved
Files changed (3)
  1. README.md +1 -1
  2. maker.py +3 -3
  3. pytorch_model.bin +1 -1
README.md CHANGED
@@ -18,7 +18,7 @@ widget:
 
 ## Model Description
 
-This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-base-thai-char](https://huggingface.co/KoichiYasuoka/roberta-base-thai-char).
+This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-base-thai-char-upos](https://huggingface.co/KoichiYasuoka/roberta-base-thai-char-upos).
 
 ## How to Use
 
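The `goeswith` relation mentioned in the model description is the Universal Dependencies label that links a later part of a split word back to its first part. As a rough illustration of how such subword tokens can be merged after parsing (the token structure and example word here are hypothetical, not the model's actual output format, and this sketch assumes each `goeswith` token immediately follows the token it continues):

```python
def merge_goeswith(tokens):
    """Merge each `goeswith` token into the preceding token it continues.

    tokens: list of (form, head_index, deprel) triples, 1-based heads.
    Head reindexing after merging is omitted for brevity.
    """
    merged = []
    for form, head, rel in tokens:
        if rel == "goeswith" and merged:
            prev_form, prev_head, prev_rel = merged[-1]
            merged[-1] = (prev_form + form, prev_head, prev_rel)
        else:
            merged.append((form, head, rel))
    return merged

# Hypothetical parse where one word was split into two tokens.
tokens = [
    ("Bang", 0, "root"),
    ("kok", 1, "goeswith"),  # continuation of token 1
    ("is", 1, "cop"),
]
merged = merge_goeswith(tokens)
print(merged)  # -> [('Bangkok', 0, 'root'), ('is', 1, 'cop')]
```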
maker.py CHANGED
@@ -1,5 +1,5 @@
 #! /usr/bin/python3
-src="KoichiYasuoka/roberta-base-thai-char"
+src="KoichiYasuoka/roberta-base-thai-char-upos"
 tgt="KoichiYasuoka/roberta-base-thai-char-ud-goeswith"
 url="https://github.com/KoichiYasuoka/spaCy-Thai"
 import os
@@ -47,9 +47,9 @@ trainDS=UDgoeswithDataset("train.conllu",tkz)
 devDS=UDgoeswithDataset("dev.conllu",tkz)
 testDS=UDgoeswithDataset("test.conllu",tkz)
 lid=trainDS(devDS,testDS)
-cfg=AutoConfig.from_pretrained(src,num_labels=len(lid),label2id=lid,id2label={i:l for l,i in lid.items()})
+cfg=AutoConfig.from_pretrained(src,num_labels=len(lid),label2id=lid,id2label={i:l for l,i in lid.items()},ignore_mismatched_sizes=True)
 arg=TrainingArguments(num_train_epochs=3,per_device_train_batch_size=32,output_dir="/tmp",overwrite_output_dir=True,save_total_limit=2,evaluation_strategy="epoch",learning_rate=5e-05,warmup_ratio=0.1)
-trn=Trainer(args=arg,data_collator=DataCollatorForTokenClassification(tkz),model=AutoModelForTokenClassification.from_pretrained(src,config=cfg),train_dataset=trainDS,eval_dataset=devDS)
+trn=Trainer(args=arg,data_collator=DataCollatorForTokenClassification(tkz),model=AutoModelForTokenClassification.from_pretrained(src,config=cfg,ignore_mismatched_sizes=True),train_dataset=trainDS,eval_dataset=devDS)
 trn.train()
 trn.save_model(tgt)
 tkz.save_pretrained(tgt)
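The `ignore_mismatched_sizes=True` added above is needed because the new source model, roberta-base-thai-char-upos, already carries a token-classification head whose label count differs from the new `lid` label set; `from_pretrained` then reuses the matching encoder weights and reinitializes the mismatched head instead of raising an error. A stdlib-only sketch of that filtering idea, with illustrative parameter names and shapes (not transformers' actual internals):

```python
def filter_loadable(pretrained_shapes, model_shapes):
    """Keep only pretrained params whose name and shape match the new model.

    Mismatched ones (e.g. a classifier head sized for a different label
    set) are skipped and left freshly initialized, which is roughly what
    ignore_mismatched_sizes=True allows from_pretrained to do.
    """
    loadable, skipped = {}, []
    for name, shape in pretrained_shapes.items():
        if model_shapes.get(name) == shape:
            loadable[name] = shape
        else:
            skipped.append(name)
    return loadable, skipped

# Illustrative shapes: same encoder, different number of output labels.
pretrained = {"roberta.encoder.weight": (768, 768), "classifier.weight": (17, 768)}
new_model  = {"roberta.encoder.weight": (768, 768), "classifier.weight": (211, 768)}
loadable, skipped = filter_loadable(pretrained, new_model)
print(skipped)  # -> ['classifier.weight']
```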
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:efb6aa3a121ab9c66cfd1023b68f17d7849ee32a215503b173baf4616a06034a
+oid sha256:d3de696ea3ea8c2e5e97d12c6dacd14aeb96c06f688f686e411074ada923a33e
 size 350986481
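Because pytorch_model.bin is stored via Git LFS, the diff above changes only the pointer file (version line, content hash, byte size), not the 350 MB weights themselves. A minimal sketch of reading such a pointer into a dict, keyed on the three lines shown above:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file: each line is 'key value'."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:d3de696ea3ea8c2e5e97d12c6dacd14aeb96c06f688f686e411074ada923a33e
size 350986481
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # -> 350986481
```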