KoichiYasuoka committed on
Commit
fbcc304
1 Parent(s): 7cf86dc

align_to_words=False does not work

Files changed (1)
  1. README.md +8 -3
README.md CHANGED
@@ -30,11 +30,16 @@ This is a DeBERTa(V2) model pretrained on 青空文庫 for dependency-parsing (h
 ## How to Use
 
 ```py
-from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline
+import torch
+from transformers import AutoTokenizer,AutoModelForQuestionAnswering
 tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-large-japanese-unidic-ud-head")
 model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/deberta-large-japanese-unidic-ud-head")
-qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model,align_to_words=False)
-print(qap(question="国語",context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
+question="国語"
+context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
+inputs=tokenizer(question,context,return_tensors="pt")
+outputs=model(**inputs)
+start,end=torch.argmax(outputs.start_logits),torch.argmax(outputs.end_logits)
+print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0,start:end+1]))
 ```
 
 or
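The replacement snippet picks the answer span by taking the argmax of the start and end logits independently, then slicing the input tokens between those two positions. A minimal sketch of that selection step in plain Python (the token strings and logit values below are made up for illustration, not real model output):

```python
# Toy illustration of the span-selection step: argmax over start and end
# logits, then slice the token sequence from start to end inclusive.
# All values here are hypothetical.
tokens = ["[CLS]", "国語", "[SEP]", "全", "学年", "に", "国語", "の", "教科", "書", "[SEP]"]
start_logits = [0.1, 0.0, 0.0, 0.2, 0.3, 0.1, 2.5, 0.2, 0.4, 0.1, 0.0]
end_logits   = [0.1, 0.0, 0.0, 0.1, 0.2, 0.1, 0.3, 0.2, 0.5, 2.1, 0.0]

def argmax(xs):
    # Index of the largest value, like torch.argmax on a 1-D tensor.
    return max(range(len(xs)), key=xs.__getitem__)

start = argmax(start_logits)  # position with the highest start score
end = argmax(end_logits)      # position with the highest end score
answer = tokens[start:end + 1]
print(answer)  # → ['国語', 'の', '教科', '書']
```

Note that this naive selection does not enforce `start <= end` or exclude question tokens, which the `QuestionAnsweringPipeline` normally handles; that is presumably why the README snippet above is only a minimal workaround for the broken `align_to_words=False` path.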