KoichiYasuoka committed
Commit de4253a
1 Parent(s): 0f48181

dependency-parsing

Files changed (1): README.md +10 -1
README.md CHANGED
@@ -6,6 +6,7 @@ tags:
 - "token-classification"
 - "pos"
 - "wikipedia"
+- "dependency-parsing"
 datasets:
 - "universal_dependencies"
 license: "apache-2.0"
@@ -18,7 +19,7 @@ widget:
 
 ## Model Description
 
-This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging, derived from [roberta-base-thai-syllable](https://huggingface.co/KoichiYasuoka/roberta-base-thai-syllable). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
+This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [roberta-base-thai-syllable](https://huggingface.co/KoichiYasuoka/roberta-base-thai-syllable). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
 
 ## How to Use
 
@@ -33,6 +34,14 @@ p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,retur
 print(list(zip(t,p)))
 ```
 
+or
+
+```
+import esupar
+nlp=esupar.load("KoichiYasuoka/roberta-base-thai-syllable-upos")
+print(nlp("หลายหัวดีกว่าหัวเดียว"))
+```
+
 ## See Also
 
 [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
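The README's transformers snippet (partially visible in the hunk context above) decodes per-token logits into UPOS tags by taking an argmax over label scores and looking the result up in `model.config.id2label`. A minimal sketch of just that decoding step, with dummy tokens, dummy logits, and an illustrative subset of the UPOS label set (no model download required):

```python
# Sketch of the argmax -> id2label decoding step. The label inventory and
# the logits below are made-up illustrations, not the model's real values.
id2label = {0: "NOUN", 1: "VERB", 2: "ADJ", 3: "ADP"}

def decode(logits):
    """Return the highest-scoring label for each token's row of logits."""
    return [id2label[max(range(len(row)), key=row.__getitem__)] for row in logits]

tokens = ["หลาย", "หัว", "ดี"]       # dummy syllable tokens
logits = [
    [0.1, 0.2, 2.5, 0.0],            # argmax -> 2 -> "ADJ"
    [3.0, 0.1, 0.2, 0.0],            # argmax -> 0 -> "NOUN"
    [0.0, 0.1, 4.0, 0.2],            # argmax -> 2 -> "ADJ"
]
print(list(zip(tokens, decode(logits))))
```

In the actual model card, the logits come from running the tokenized sentence through the token-classification head; the zip of tokens and predicted tags is what `print(list(zip(t,p)))` emits.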