KoichiYasuoka committed on
Commit
befc0b4
1 Parent(s): 6514e69
Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -29,8 +29,9 @@ from transformers import AutoTokenizer,AutoModelForTokenClassification
 tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-unidic-luw-upos")
 model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-japanese-unidic-luw-upos")
 s="国境の長いトンネルを抜けると雪国であった。"
+t=tokenizer.tokenize(s)
 p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
-print(list(zip(s,p)))
+print(list(zip(t,p)))
 ```
 
 or
@@ -41,7 +42,7 @@ nlp=esupar.load("KoichiYasuoka/bert-base-japanese-unidic-luw-upos")
 print(nlp("国境の長いトンネルを抜けると雪国であった。"))
 ```
 
-[fugashi](https://pypi.org/project/fugashi) [unidic-lite](https://pypi.org/project/unidic-lite) [pytokenizations](https://pypi.org/project/pytokenizations) are required.
+[fugashi](https://pypi.org/project/fugashi), [unidic-lite](https://pypi.org/project/unidic-lite) and [pytokenizations](https://pypi.org/project/pytokenizations) are required.
 
 ## Reference
 
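The commit fixes an alignment bug: the model emits one UPOS label per tokenizer token, but `zip(s,p)` pairs those labels with the individual *characters* of `s`, so everything after the first multi-character token is misaligned. A minimal sketch of the mismatch, using hypothetical tokens and labels (not the model's actual output, which requires downloading the checkpoint):

```python
# Hypothetical token split and UPOS labels for illustration only;
# the real values come from tokenizer.tokenize(s) and the model logits.
s = "国境の長いトンネルを抜けると雪国であった。"
t = ["国境", "の", "長い", "トンネル", "を", "抜ける", "と", "雪国", "で", "あっ", "た", "。"]
p = ["NOUN", "ADP", "ADJ", "NOUN", "ADP", "VERB", "ADP", "NOUN", "AUX", "AUX", "AUX", "PUNCT"]

wrong = list(zip(s, p))  # iterates s character by character -> misaligned pairs
right = list(zip(t, p))  # one token per label -> aligned pairs

print(wrong[:3])  # [('国', 'NOUN'), ('境', 'ADP'), ('の', 'ADJ')] -- drifts after '国'
print(right[:3])  # [('国境', 'NOUN'), ('の', 'ADP'), ('長い', 'ADJ')]
```

Since `len(s)` (characters) exceeds `len(t)` (tokens) here, the buggy version also silently truncates: `zip` stops at the shorter sequence, dropping the trailing labels' intended pairings.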