bclavie committed on
Commit
2097f35
1 Parent(s): 9459040

Create README.md

Files changed (1)
  1. README.md +150 -0
README.md ADDED
---
library_name: transformers
license: apache-2.0
language:
- en
tags:
- fill-mask
- masked-lm
- long-context
- PyTorch
- Safetensors
- modernbert
pipeline_tag: fill-mask
---

# ModernBERT

## Table of Contents
1. [Model Summary](#model-summary)
2. [Usage](#usage)
3. [Evaluation](#evaluation)
4. [Limitations](#limitations)
5. [Training](#training)
6. [License](#license)
7. [Citation](#citation)

## Model Summary

ModernBERT is a modernized bidirectional encoder-only Transformer model (BERT-style) pre-trained on 2 trillion tokens of English and code data with a native context length of up to 8,192 tokens. ModernBERT leverages recent architectural improvements such as:

- **Rotary Positional Embeddings (RoPE)** for long-context support.
- **Local-Global Alternating Attention** for efficiency on long inputs.
- **Unpadding and Flash Attention** for efficient inference.

ModernBERT’s native long context length makes it ideal for tasks that require processing long documents, such as retrieval, classification, and semantic search within large corpora. The model was trained on a large corpus of text and code, making it suitable for a wide range of downstream tasks, including code retrieval and hybrid (text + code) semantic search.

It is available in the following sizes:

- [ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) - 22 layers, 149 million parameters
- [ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) - 28 layers, 395 million parameters

## Usage

You can use these models directly with the `transformers` library. Since ModernBERT is a Masked Language Model (MLM), you can use the `fill-mask` pipeline or load it via `AutoModelForMaskedLM`.

Using `AutoModelForMaskedLM`:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

checkpoint = "answerdotai/ModernBERT-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# To get predictions for the mask:
logits = outputs.logits
masked_index = (inputs["input_ids"] == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, masked_index].argmax(axis=-1)
predicted_token = tokenizer.decode(predicted_token_id)
print("Predicted token:", predicted_token)
```

Using a pipeline:

```python
import torch
from transformers import pipeline
from pprint import pprint

pipe = pipeline(
    "fill-mask",
    model="answerdotai/ModernBERT-base",
    torch_dtype=torch.bfloat16,
)

input_text = "He walked to the [MASK]."
results = pipe(input_text)
pprint(results)
```

To use ModernBERT for downstream tasks like classification, retrieval, or QA, fine-tune it following standard BERT fine-tuning recipes.

**Note:** ModernBERT does not use token type IDs, unlike some earlier BERT models. Most downstream usage is identical to standard BERT models on the Hugging Face Hub, except you can omit the `token_type_ids` parameter.
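
For example, here is a minimal, illustrative sketch of a sequence-classification fine-tune. It is not an official recipe: the dataset (GLUE/SST-2), label count, and hyperparameters below are placeholders chosen only to show the standard `transformers` workflow.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "answerdotai/ModernBERT-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# num_labels=2 is a placeholder for a binary classification task.
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# SST-2 is used purely as an illustrative dataset.
dataset = load_dataset("glue", "sst2")

def tokenize(batch):
    # No token_type_ids are needed for ModernBERT.
    return tokenizer(batch["sentence"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="modernbert-sst2",      # illustrative output path
    learning_rate=5e-5,                # placeholder hyperparameters
    num_train_epochs=2,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```

Retrieval and question-answering fine-tuning follow the same pattern with the corresponding task-specific heads or training libraries.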
88
+
89
+ ## Evaluation
90
+
91
+ We evaluate ModernBERT across a range of tasks, including natural language understanding (GLUE), general retrieval (BEIR), long-context retrieval (MLDR), and code retrieval (CodeSearchNet and StackQA).
92
+
93
+ **Key highlights:**
94
+ - On GLUE, ModernBERT-base surpasses other similarly-sized encoder models, and ModernBERT-large is second only to Deverta-v3-large.
95
+ - For general retrieval tasks, ModernBERT performs well on BEIR in both single-vector (DPR-style) and multi-vector (ColBERT-style) settings.
96
+ - Thanks to the inclusion of code data in its training mixture, ModernBERT as a backbone also achieves new state-of-the-art code retrieval results on CodeSearchNet and StackQA.
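
As a rough illustration of the single-vector setting, the sketch below mean-pools ModernBERT's final hidden states into one embedding per text and ranks documents by cosine similarity. This is only an approximation for demonstration purposes: the retrieval numbers reported below come from retrievers fine-tuned on top of the backbone, and an untuned MLM will not match them.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

checkpoint = "answerdotai/ModernBERT-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)
model.eval()

def embed(texts):
    # Mean-pool the final hidden states over non-padding tokens.
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    mask = inputs["attention_mask"].unsqueeze(-1).float()
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
    return F.normalize(pooled, dim=-1)

query = embed(["How do I sort a list in Python?"])
docs = embed([
    "Use the built-in sorted() function or list.sort().",
    "The Eiffel Tower is located in Paris.",
])
# Dot product of L2-normalized vectors = cosine similarity.
print(query @ docs.T)
```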

### Base Models

| Model      | BEIR (DPR) | MLDR_OOD (DPR) | MLDR_ID (DPR) | BEIR (ColBERT) | MLDR_OOD (ColBERT) | GLUE (NLU) | CSN (Code) | SQA (Code) |
|------------|------------|----------------|---------------|----------------|--------------------|------------|------------|------------|
| BERT       | 38.9       | 23.9           | 32.2          | 49.0           | 28.1               | 84.7       | 41.2       | 59.5       |
| RoBERTa    | 37.7       | 22.9           | 32.8          | 48.7           | 28.2               | 86.4       | 44.3       | 59.6       |
| DeBERTaV3  | 20.2       | 5.4            | 13.4          | 47.1           | 21.9               | 88.1       | 17.5       | 18.6       |
| NomicBERT  | 41.0       | 26.7           | 30.3          | 49.9           | 61.3               | 84.0       | 41.6       | 61.4       |
| GTE-en-MLM | 41.4       | **34.3**       | **44.4**      | 48.2           | 69.3               | 85.6       | 44.9       | 71.4       |
| ModernBERT | **41.6**   | 27.4           | 44.0          | **51.3**       | **80.2**           | **88.4**   | **56.4**   | **73.6**   |

---

### Large Models

| Model      | BEIR (DPR) | MLDR_OOD (DPR) | MLDR_ID (DPR) | BEIR (ColBERT) | MLDR_OOD (ColBERT) | GLUE (NLU) | CSN (Code) | SQA (Code) |
|------------|------------|----------------|---------------|----------------|--------------------|------------|------------|------------|
| BERT       | 38.9       | 23.3           | 31.7          | 49.5           | 28.5               | 85.2       | 41.6       | 60.8       |
| RoBERTa    | 41.4       | 22.6           | 36.1          | 49.8           | 28.8               | 88.9       | 47.3       | 68.1       |
| DeBERTaV3  | 25.6       | 7.1            | 19.2          | 46.7           | 23.0               | **91.4**   | 21.2       | 19.7       |
| GTE-en-MLM | 42.5       | **36.4**       | **48.9**      | 50.7           | 71.3               | 87.6       | 40.5       | 66.9       |
| ModernBERT | **44.0**   | 34.3           | 48.6          | **52.4**       | **80.4**           | 90.4       | **59.5**   | **83.9**   |

*Table 1: Overview of results for all models across all tasks. CSN refers to CodeSearchNet and SQA to StackQA. MLDR_ID refers to in-domain evaluation (fine-tuned on the training set) and MLDR_OOD to out-of-domain evaluation. DPR and ColBERT denote the single-vector and multi-vector retrieval settings, respectively.*

ModernBERT’s strong results, coupled with its efficient runtime on long-context inputs, demonstrate that encoder-only models can be significantly improved through modern architectural choices and extensive pretraining on diversified data sources.

## Limitations

ModernBERT’s training data is primarily English and code, so performance may be lower for other languages. While it can handle long sequences efficiently, using the full 8,192-token window may be slower than short-context inference. Like any large language model, ModernBERT may produce representations that reflect biases present in its training data. Verify critical or sensitive outputs before relying on them.

## Training

- Architecture: Encoder-only, Pre-Norm Transformer with GeGLU activations.
- Sequence Length: Pre-trained up to 1,024 tokens, then extended to 8,192 tokens.
- Data: 2 trillion tokens of English text and code.
- Optimizer: StableAdamW with trapezoidal LR scheduling and 1-sqrt decay.
- Hardware: Trained on 8x H100 GPUs.

See the paper for more details.
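
As a quick way to see how these choices surface in the released checkpoints, you can inspect the configuration with `transformers`. The attributes printed below are standard config fields; ModernBERT-specific settings (e.g. the local/global attention layout) appear when printing the full config object:

```python
from transformers import AutoConfig

# Load the published configuration for the base checkpoint.
config = AutoConfig.from_pretrained("answerdotai/ModernBERT-base")

print(config.num_hidden_layers)        # number of Transformer layers
print(config.hidden_size)              # hidden dimension
print(config.max_position_embeddings)  # native context length (8,192)
print(config)                          # full config, including ModernBERT-specific fields
```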

## License

We release the ModernBERT model architectures, model weights, and training codebase under the Apache 2.0 license.

## Citation

If you use ModernBERT in your work, please cite:

**TODO: Citation**