MLX
English
mlx-llm
exbert
riccardomusmeci committed
Commit 50a74ad
1 Parent(s): e537c76

Update README.md

Files changed (1)
  1. README.md +1 -4
README.md CHANGED
@@ -19,9 +19,6 @@ Pretrained model on English language using a masked language modeling (MLM) obje
 [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
 between english and English.
 
-Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
-the Hugging Face team.
-
 ## Model description
 
 Please, refer to the [original model card](https://huggingface.co/bert-base-uncased) for more details on bert-base-uncased.
@@ -42,7 +39,7 @@ from transformers import BertTokenizer
 import mlx.core as mx
 
 model = create_model("bert-base-uncased") # it will download weights from this repository
-tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
+tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
 
 batch = ["This is an example of BERT working on MLX."]
 tokens = tokenizer(batch, return_tensors="np", padding=True)
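For context, here is a minimal sketch of how the corrected snippet might be used end to end. Only the lines visible in the diff above come from the README; the `from mlx_llm.model import create_model` import path and the conversion of the tokenizer output to MLX arrays are assumptions, and the exact forward call depends on the mlx-llm BERT implementation.

```python
# Minimal usage sketch (assumptions noted inline); only the diffed lines come from the README.
import mlx.core as mx
from transformers import BertTokenizer
from mlx_llm.model import create_model  # assumed import path; the diff does not show the README's import for create_model

model = create_model("bert-base-uncased")  # it will download weights from this repository
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")  # matches the model name after this commit

batch = ["This is an example of BERT working on MLX."]
tokens = tokenizer(batch, return_tensors="np", padding=True)

# Assumption: the NumPy arrays returned by the tokenizer are converted to MLX arrays before the forward pass.
inputs = {k: mx.array(v) for k, v in tokens.items()}
# output = model(**inputs)  # exact call signature depends on the mlx-llm BERT implementation
```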