Matttttttt committed
Commit f5c5e65
1 Parent(s): 37fe18e

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -18,7 +18,7 @@ You can use this model as follows:
 ```python
 from transformers import XLMRobertaTokenizer, MBartForConditionalGeneration
 tokenizer = XLMRobertaTokenizer.from_pretrained('ku-nlp/bart-v2-large-japanese')
-model = MBartForConditionalGeneration.from_pretrained('ku-nlp/bart-v2-large-japanese/')
+model = MBartForConditionalGeneration.from_pretrained('ku-nlp/bart-v2-large-japanese')
 sentence = '京都 大学 で 自然 言語 処理 を 専攻 する 。' # input should be segmented into words by Juman++ in advance
 encoding = tokenizer(sentence, return_tensors='pt')
 ...
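The one-line fix above removes a stray trailing slash from the repository identifier passed to `from_pretrained`. Hub repo ids are expected to have the `namespace/name` form, so a trailing slash can cause the lookup to fail. As a minimal sketch (the `normalize_repo_id` helper below is hypothetical, not part of `transformers`), the same defensive cleanup can be done before the call:

```python
def normalize_repo_id(repo_id: str) -> str:
    # Hypothetical helper: strip stray trailing slashes so the id
    # matches the 'namespace/name' form the Hub expects.
    return repo_id.rstrip('/')

print(normalize_repo_id('ku-nlp/bart-v2-large-japanese/'))
# → ku-nlp/bart-v2-large-japanese
```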