noriyukipy committed
Commit 5d38c0a
1 Parent(s): 0bf7a3e

Update model card

Files changed (1)
  1. README.md +4 -5
README.md CHANGED
````diff
@@ -20,9 +20,7 @@ This repository contains a Sentence BERT base model for Japanese.
 
 ## Pretrained model
 
-Pretrained BERT model [colorfulscoop/bert-base-ja](https://huggingface.co/colorfulscoop/bert-base-ja) v1.0 is used
-
-This model is trained on Japanese Wikipedia data and relased under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/) .
+This model utilizes a Japanese BERT model [colorfulscoop/bert-base-ja](https://huggingface.co/colorfulscoop/bert-base-ja) v1.0 released under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/) as a pretrained model.
 
 ## Training data
 
@@ -36,11 +34,12 @@ Original training dataset is splitted into train/valid dataset. Finally, follwoi
 
 ## Model description
 
-`SentenceTransformer` model from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) library is used for training.
+This model utilizes `SentenceTransformer` model from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) .
 The model detail is as below.
 
 ```py
->>> sentence_transformers.SentenceTransformer("colorfulscoop/sbert-base-ja")
+>>> from sentence_transformers import SentenceTransformer
+>>> SentenceTransformer("colorfulscoop/sbert-base-ja")
 SentenceTransformer(
   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
````
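
For context on the updated "Model description" snippet: the REPL lines in the diff only load the model and print its architecture. Below is a minimal sketch of actually encoding sentences with the published checkpoint via `SentenceTransformer.encode`; the example sentences are illustrative and not part of the model card.

```py
>>> from sentence_transformers import SentenceTransformer
>>> model = SentenceTransformer("colorfulscoop/sbert-base-ja")
>>> # encode() returns one 768-dimensional vector per input sentence
>>> embeddings = model.encode(["今日はいい天気です。", "明日は雨が降るそうです。"])
>>> embeddings.shape
(2, 768)
```

Cosine similarity between such vectors is the usual way to compare sentences with a Sentence BERT model of this kind.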
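
The `Pooling` entry in the architecture printout has `pooling_mode_mean_tokens` set to `True`, i.e. a sentence embedding is the attention-mask-weighted mean of the token embeddings produced by the BERT encoder. The sketch below reproduces that pooling step with plain `transformers` and PyTorch; it assumes the checkpoint loads through `AutoModel`/`AutoTokenizer` like a standard Hugging Face BERT repository, and the variable names are illustrative.

```py
import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: the repository exposes standard transformer weights and tokenizer files.
tokenizer = AutoTokenizer.from_pretrained("colorfulscoop/sbert-base-ja")
encoder = AutoModel.from_pretrained("colorfulscoop/sbert-base-ja")

sentences = ["今日はいい天気です。", "明日は雨が降るそうです。"]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling over non-padding tokens, mirroring pooling_mode_mean_tokens=True.
mask = batch["attention_mask"].unsqueeze(-1).float()       # (batch, seq_len, 1)
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

print(sentence_embeddings.shape)  # torch.Size([2, 768])
```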