Matttttttt committed
Commit 902b2a4
1 Parent(s): 279848f

fixed a description error in README

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -43,7 +43,7 @@ We used the following corpora for pre-training:
  We first segmented texts in the corpora into words using [Juman++](https://github.com/ku-nlp/jumanpp).
  Then, we built a sentencepiece model with 32000 tokens including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC)) and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece).

- We tokenized the segmented corpora into subwords using the sentencepiece model and trained the Japanese BART model using [transformers](https://github.com/huggingface/transformers) library.
+ We tokenized the segmented corpora into subwords using the sentencepiece model and trained the Japanese BART model using [fairseq](https://github.com/facebookresearch/fairseq) library.
  The training took 2 weeks using 4 Tesla V100 GPUs.

  The following hyperparameters were used during pre-training:
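
For context, the pipeline described in this hunk (train a 32000-token unigram sentencepiece model on the Juman++-segmented corpora, then tokenize with it) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' actual script: the file names, model prefix, and sample sentence are assumptions, and the way the JumanDIC words were included in the vocabulary is guessed at via `user_defined_symbols`.

```python
import sentencepiece as spm

# Train a unigram sentencepiece model with a 32000-token vocabulary.
# "segmented.txt" is a hypothetical file of Juman++-segmented text:
# one sentence per line, words separated by spaces.
spm.SentencePieceTrainer.train(
    input="segmented.txt",         # hypothetical corpus path
    model_prefix="japanese_bart",  # hypothetical output prefix
    vocab_size=32000,
    model_type="unigram",
    # Assumption: JumanDIC words forced into the vocabulary, e.g.
    # user_defined_symbols=["日本語", ...],
)

# Tokenize the segmented corpora into subwords with the trained model;
# the resulting subword text would then be binarized and fed to fairseq.
sp = spm.SentencePieceProcessor(model_file="japanese_bart.model")
print(sp.encode("これ は テスト です 。", out_type=str))
```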