---
license: apache-2.0
library_name: mlx-llm
language:
- en
tags:
- mlx
- exbert
datasets:
- bookcorpus
- wikipedia
---
# BERT base model (uncased) - MLX
A model pretrained on English text using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference between english and English.
**Disclaimer:** the team releasing BERT did not write a model card for this model, so this model card has been written by the Hugging Face team.
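As a quick illustration of the uncased behavior, the tokenizer lowercases its input before WordPiece tokenization, so differently cased spellings map to the same tokens. A minimal sketch using the Hugging Face tokenizer:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Both spellings are lowercased before tokenization,
# so they produce the same tokens (and the same ids).
print(tokenizer.tokenize("English"))  # ['english']
print(tokenizer.tokenize("english"))  # ['english']
```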
## Model description
Please refer to the [original model card](https://huggingface.co/bert-base-uncased) for more details on `bert-base-uncased`.
## Use it with mlx-llm
Install `mlx-llm` from GitHub:
```bash
git clone https://github.com/riccardomusmeci/mlx-llm
cd mlx-llm
pip install .
```
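To check that the installation succeeded, you can try importing the `create_model` entry point used in the snippet below (a quick sanity check, not an official command from the project):

```bash
python -c "from mlx_llm.model import create_model; print('mlx-llm installed')"
```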
Run
```python
import mlx.core as mx
from mlx_llm.model import create_model
from transformers import BertTokenizer

# Build the model; weights are downloaded from this repository
model = create_model("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

batch = ["This is an example of BERT working on MLX."]

# Tokenize to NumPy arrays, then convert them to MLX arrays
tokens = tokenizer(batch, return_tensors="np", padding=True)
tokens = {key: mx.array(v) for key, v in tokens.items()}

# Forward pass: per-token embeddings and the pooled [CLS] representation
output, pooled = model(**tokens)
```
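The pooled output can serve as a rough sentence embedding. A minimal sketch, assuming the same `create_model` and tokenization flow as above, comparing two sentences with cosine similarity:

```python
import mlx.core as mx
from mlx_llm.model import create_model
from transformers import BertTokenizer

model = create_model("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

batch = [
    "BERT runs on Apple silicon via MLX.",
    "The model works with the MLX framework.",
]
tokens = tokenizer(batch, return_tensors="np", padding=True)
tokens = {key: mx.array(v) for key, v in tokens.items()}

_, pooled = model(**tokens)

# Cosine similarity between the two pooled [CLS] embeddings
norms = mx.sqrt(mx.sum(pooled * pooled, axis=-1, keepdims=True))
unit = pooled / norms
similarity = mx.sum(unit[0] * unit[1])
print(similarity.item())
```

Note that raw pooled BERT outputs are a crude similarity signal; models fine-tuned for sentence embeddings generally work better for this task.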