KoichiYasuoka committed on
Commit
03ea4bb
1 Parent(s): 65d08c9

example changed

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -10,7 +10,7 @@ tags:
 license: "apache-2.0"
 pipeline_tag: "token-classification"
 widget:
-- text: "子曰學而時習之不亦說乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎"
+- text: "子曰學而時習之不亦説乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎"
 ---
 
 # roberta-classical-chinese-base-sentence-segmentation
@@ -26,7 +26,7 @@ import torch
 from transformers import AutoTokenizer,AutoModelForTokenClassification
 tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation")
 model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation")
-s="子曰學而時習之不亦說乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎"
+s="子曰學而時習之不亦説乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎"
 p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))[0],dim=2)[0].tolist()[1:-1]]
 print("".join(c+"。" if q=="E" or q=="S" else c for c,q in zip(s,p)))
 ```
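For convenience, here is the README example as it reads after this commit, reassembled from the hunks above; the initial `import torch` line is taken from the hunk-header context rather than the hunk body, and the comments are added here as explanation of what the snippet does:

```python
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification

tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation")

# Unpunctuated Classical Chinese input (the opening of the Analects), using 説 as in this commit
s="子曰學而時習之不亦説乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎"

# Per-character labels from the token classifier; [1:-1] drops the special [CLS]/[SEP] positions
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))[0],dim=2)[0].tolist()[1:-1]]

# Insert a full stop after characters labelled E or S (sentence-final positions in the model's tagging scheme)
print("".join(c+"。" if q=="E" or q=="S" else c for c,q in zip(s,p)))
```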