KoichiYasuoka committed on
Commit c31ec02
1 Parent(s): fb5a49f

model improved

Files changed (3)
  1. README.md +1 -1
  2. maker.py +3 -3
  3. pytorch_model.bin +1 -1
README.md CHANGED
@@ -18,7 +18,7 @@ widget:
 
 ## Model Description
 
-This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-base-thai-syllable](https://huggingface.co/KoichiYasuoka/roberta-base-thai-syllable).
+This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-base-thai-syllable-upos](https://huggingface.co/KoichiYasuoka/roberta-base-thai-syllable-upos).
 
 ## How to Use
 
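The `## How to Use` section itself lies outside this hunk. For orientation only, a minimal sketch of loading the resulting model with the plain `transformers` token-classification API might look like the following; the example sentence and the decoding step are illustrative assumptions, and the README's actual usage instructions (not shown here) may rely on a different pipeline.

```python
# Minimal sketch, assuming the standard transformers token-classification API;
# the README's own "How to Use" section (not shown in this diff) may differ.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

mdl_name = "KoichiYasuoka/roberta-base-thai-syllable-ud-goeswith"  # model built by maker.py below
tkz = AutoTokenizer.from_pretrained(mdl_name)
mdl = AutoModelForTokenClassification.from_pretrained(mdl_name)

text = "หลายหัวดีกว่าหัวเดียว"  # illustrative Thai sentence ("many heads are better than one")
enc = tkz(text, return_tensors="pt")
with torch.no_grad():
    logits = mdl(**enc).logits             # shape: (1, sequence_length, num_labels)
ids = logits.argmax(dim=-1)[0].tolist()    # best label id per subword token
tokens = tkz.convert_ids_to_tokens(enc["input_ids"][0].tolist())
print([(t, mdl.config.id2label[i]) for t, i in zip(tokens, ids)])
```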
maker.py CHANGED
@@ -1,5 +1,5 @@
 #! /usr/bin/python3
-src="KoichiYasuoka/roberta-base-thai-syllable"
+src="KoichiYasuoka/roberta-base-thai-syllable-upos"
 tgt="KoichiYasuoka/roberta-base-thai-syllable-ud-goeswith"
 url="https://github.com/KoichiYasuoka/spaCy-Thai"
 import os
@@ -47,9 +47,9 @@ trainDS=UDgoeswithDataset("train.conllu",tkz)
 devDS=UDgoeswithDataset("dev.conllu",tkz)
 testDS=UDgoeswithDataset("test.conllu",tkz)
 lid=trainDS(devDS,testDS)
-cfg=AutoConfig.from_pretrained(src,num_labels=len(lid),label2id=lid,id2label={i:l for l,i in lid.items()})
+cfg=AutoConfig.from_pretrained(src,num_labels=len(lid),label2id=lid,id2label={i:l for l,i in lid.items()},ignore_mismatched_sizes=True)
 arg=TrainingArguments(num_train_epochs=3,per_device_train_batch_size=32,output_dir="/tmp",overwrite_output_dir=True,save_total_limit=2,evaluation_strategy="epoch",learning_rate=5e-05,warmup_ratio=0.1)
-trn=Trainer(args=arg,data_collator=DataCollatorForTokenClassification(tkz),model=AutoModelForTokenClassification.from_pretrained(src,config=cfg),train_dataset=trainDS,eval_dataset=devDS)
+trn=Trainer(args=arg,data_collator=DataCollatorForTokenClassification(tkz),model=AutoModelForTokenClassification.from_pretrained(src,config=cfg,ignore_mismatched_sizes=True),train_dataset=trainDS,eval_dataset=devDS)
 trn.train()
 trn.save_model(tgt)
 tkz.save_pretrained(tgt)
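Context for the two `ignore_mismatched_sizes=True` additions: the new source checkpoint, `roberta-base-thai-syllable-upos`, already carries a token-classification head sized for its own label set, whereas `maker.py` rebuilds `num_labels` from the labels it collects out of the CoNLL-U files, so the saved classifier weights no longer match and must be discarded and reinitialized at load time. A minimal sketch of that mechanism, with a hypothetical label mapping standing in for the one built from the treebank:

```python
# Sketch of why ignore_mismatched_sizes=True is needed; the label2id mapping here
# is hypothetical, standing in for the one maker.py derives from the CoNLL-U files.
from transformers import AutoConfig, AutoModelForTokenClassification

src = "KoichiYasuoka/roberta-base-thai-syllable-upos"  # checkpoint with an existing classification head
lid = {"root": 0, "acl": 1, "goeswith": 2}             # hypothetical stand-in labels
cfg = AutoConfig.from_pretrained(src, num_labels=len(lid), label2id=lid,
                                 id2label={i: l for l, i in lid.items()})

# Without ignore_mismatched_sizes=True this call raises a size-mismatch error,
# because the stored classifier weights were shaped for the old label set;
# with it, the mismatched head is dropped and a fresh one is initialized.
mdl = AutoModelForTokenClassification.from_pretrained(src, config=cfg,
                                                      ignore_mismatched_sizes=True)
print(mdl.config.num_labels)  # 3
```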
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5614b6b4e9336d394455eb9e54aabddea48d6c140c8b665e0f8961d0d5e65dfc
+oid sha256:ee961002fe2aede0ff323c639b47ac75eb3ead1ea146df9f9e23e3a360ca96b1
 size 393552177