---
license: apache-2.0
library_name: mlx-llm
language:
- en
tags:
- mlx
- exbert
datasets:
- bookcorpus
- wikipedia
---


# BERT base model (uncased) - MLX 

Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.

Disclaimer: The team releasing BERT did not write a model card for this model, so this model card has been written by
the Hugging Face team.

## Model description

Please refer to the [original model card](https://huggingface.co/bert-base-uncased) for more details on bert-base-uncased.

## Use it with mlx-llm

Install `mlx-llm` from GitHub.
```bash
git clone https://github.com/riccardomusmeci/mlx-llm
cd mlx-llm
pip install .
```
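As a quick sanity check that the installation succeeded, the import below should run without errors (printing the module path is just for confirmation):
```python
import mlx_llm  # should import cleanly after `pip install .`
print(mlx_llm.__file__)
```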

Run
```python
from mlx_llm.model import create_model
from transformers import BertTokenizer
import mlx.core as mx

model = create_model("bert-base-uncased") # it will download weights from this repository
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

batch = ["This is an example of BERT working on MLX."]
tokens = tokenizer(batch, return_tensors="np", padding=True)
# Convert the NumPy arrays returned by the tokenizer to MLX arrays.
tokens = {key: mx.array(v) for key, v in tokens.items()}

output, pooled = model(**tokens)  # sequence output and pooled output
```
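
As a follow-up, the sketch below turns the sequence output into sentence embeddings by mean-pooling over non-padding tokens and compares two sentences with cosine similarity. It assumes `output` has shape `(batch, seq_len, hidden)`, the standard BERT convention, which is not documented here.

```python
from mlx_llm.model import create_model
from transformers import BertTokenizer
import mlx.core as mx

model = create_model("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

sentences = ["MLX runs on Apple silicon.", "Apple silicon runs MLX."]
tokens = tokenizer(sentences, return_tensors="np", padding=True)
mask = mx.array(tokens["attention_mask"])
tokens = {key: mx.array(v) for key, v in tokens.items()}

output, _ = model(**tokens)  # assumed shape: (batch, seq_len, hidden)

# Mean-pool over real tokens only: zero out padding positions,
# then divide by the number of non-padding tokens per sentence.
mask = mx.expand_dims(mask, axis=-1).astype(output.dtype)
embeddings = (output * mask).sum(axis=1) / mask.sum(axis=1)

# L2-normalize, so cosine similarity reduces to a dot product.
embeddings = embeddings / mx.sqrt((embeddings**2).sum(axis=-1, keepdims=True))
print((embeddings[0] * embeddings[1]).sum())
```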