KoichiYasuoka committed on
Commit 7884611
1 Parent(s): 9e7ad4e

dependency-parsing

Files changed (1)
  1. README.md +10 -1
README.md CHANGED
@@ -6,6 +6,7 @@ tags:
 - "token-classification"
 - "pos"
 - "wikipedia"
+- "dependency-parsing"
 datasets:
 - "universal_dependencies"
 license: "cc-by-sa-4.0"
@@ -18,7 +19,7 @@ widget:
 
 ## Model Description
 
-This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging, derived from [bert-large-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-large-japanese-char-extended). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
+This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-large-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-large-japanese-char-extended). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
 
 ## How to Use
 
@@ -32,6 +33,14 @@ p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,retur
 print(list(zip(s,p)))
 ```
 
+or
+
+```py
+import esupar
+nlp=esupar.load("KoichiYasuoka/bert-large-japanese-luw-upos")
+print(nlp("国境の長いトンネルを抜けると雪国であった。"))
+```
+
 ## See Also
 
 [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
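
The last hunk's header only shows the tail of the existing `transformers`-based snippet in "How to Use". A minimal sketch of how that character-level UPOS tagging example presumably reads in full; the `AutoTokenizer`/`AutoModelForTokenClassification` loading lines and the `[1:-1]` slice are assumptions not visible in this diff:

```py
# Sketch only: the from_pretrained calls and the [CLS]/[SEP] trimming are
# assumed from the visible fragment, not taken verbatim from the README.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-luw-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-japanese-luw-upos")
s = "国境の長いトンネルを抜けると雪国であった。"
# Encode the sentence, take the argmax over the label logits, drop the
# special-token positions, and map label ids back to UPOS tag strings.
p = [model.config.id2label[q]
     for q in torch.argmax(model(tokenizer.encode(s, return_tensors="pt"))["logits"], dim=2)[0].tolist()[1:-1]]
print(list(zip(s, p)))
```

Pairing the tags with the characters of `s` only makes sense because the base model (bert-large-japanese-char-extended) appears to use character-level tokenization; the esupar snippet added by this commit wraps the same model for joint tokenization, POS-tagging, and dependency-parsing.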