gchhablani committed
Commit 663799f
1 Parent(s): 7e1eb68

Update README.md

Files changed (1)
  1. README.md +5 -3
README.md CHANGED
@@ -7,7 +7,7 @@ datasets:
 - c4
 ---
 
-# BERT base model (uncased)
+# FNet base model
 
 Pretrained model on English language using a masked language modeling (MLM) and next sentence prediction (NSP) objective. It was
 introduced in [this paper](https://arxiv.org/abs/2105.03824) and first released in [this repository](https://github.com/google-research/f_net).
@@ -82,8 +82,10 @@ output = model(**encoded_input)
 Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. However, the model's MLM accuracy may also affect answers. Given below are some examples where gender bias could be expected:
 
 ```python
->>> from transformers import pipeline
->>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
+>>> from transformers import FNetForMaskedLM, FNetTokenizer, pipeline
+>>> tokenizer = FNetTokenizer.from_pretrained("google/fnet-base")
+>>> model = FNetForMaskedLM.from_pretrained("google/fnet-base")
+>>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
 >>> unmasker("The man worked as a [MASK].")
 
 [
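
For context, the snippet added in this diff can be run end to end roughly as follows. This is a minimal sketch rather than part of the commit; it assumes a `transformers` release with FNet support (4.11 or later), `sentencepiece` installed, and access to the `google/fnet-base` checkpoint on the Hub.

```python
# Minimal sketch of the fill-mask usage shown in the updated README
# (assumption: transformers >= 4.11 with FNet support is installed).
from transformers import FNetForMaskedLM, FNetTokenizer, pipeline

# Load the FNet checkpoint referenced in the diff.
tokenizer = FNetTokenizer.from_pretrained("google/fnet-base")
model = FNetForMaskedLM.from_pretrained("google/fnet-base")
unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# Each prediction is a dict with 'sequence', 'score', 'token', and 'token_str'.
for prediction in unmasker("The man worked as a [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```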