Text Generation
Transformers
Safetensors
PyTorch
nvidia
SuperQAI2050 suhara committed on
Commit 2c83b69 · 0 Parent(s):

Duplicate from nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16


Co-authored-by: Yoshi Suhara <suhara@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,36 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,514 @@
+ ---
+ library_name: transformers
+ license: other
+ license_name: nvidia-nemotron-open-model-license
+ license_link: >-
+   https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-nemotron-open-model-license/
+ pipeline_tag: text-generation
+ language:
+ - en
+ - es
+ - fr
+ - de
+ - ja
+ - it
+ - pt
+ - zh
+ - ar
+ - da
+ - ko
+ - nl
+ - pl
+ - ru
+ - sv
+ - th
+ tags:
+ - nvidia
+ - pytorch
+ datasets:
+ - nvidia/Nemotron-Pretraining-Code-v1
+ - nvidia/Nemotron-CC-v2
+ - nvidia/Nemotron-Pretraining-SFT-v1
+ - nvidia/Nemotron-CC-Math-v1
+ - nvidia/Nemotron-Pretraining-Code-v2
+ - nvidia/Nemotron-Pretraining-Specialized-v1
+ - nvidia/Nemotron-CC-v2.1
+ - nvidia/Nemotron-CC-Code-v1
+ - nvidia/Nemotron-Pretraining-Dataset-sample
+ track_downloads: true
+ ---
+
+ # NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16
+ <div align="center" style="line-height: 1;">
+   <a href="https://build.nvidia.com/nvidia/nemotron-3-nano-30b-a3b" target="_blank" style="margin: 2px;">
+     <img alt="Chat" src="https://img.shields.io/badge/🤖Chat-Nemotron_3_Nano-536af5?color=76B900&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+   </a>
+   <a href="https://arxiv.org/abs/2512.20848" target="_blank" style="margin: 2px;">
+     <img alt="Paper" src="https://img.shields.io/badge/📝Paper-Read Now!-536af5?color=76B900&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+   </a>
+   <a href="https://huggingface.co/collections/nvidia/nemotron-pre-training-datasets" target="_blank" style="margin: 2px;">
+     <img alt="Pre-Training Datasets" src="https://img.shields.io/badge/🗄️_Pre--Training_Datasets-Available_Here-76B900?logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+   </a>
+ </div>
+ <div align="center" style="line-height: 1;">
+   <a href="https://developer.nvidia.com/nemotron" target="_blank" style="margin: 2px;">
+     <img alt="Homepage" src="https://img.shields.io/badge/🏠Nemotron Developer Page-Learn More Here!-536af5?color=76B900&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+   </a>
+   <a href="https://discord.gg/9xpKQtVvrk" target="_blank" style="margin: 2px;">
+     <img alt="Discord" src="https://img.shields.io/badge/Discord-NVIDIA%20AI%20Developer-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
+   </a>
+ </div>
+
+ <div align="center" style="line-height: 1;">
+   <a href="https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-nemotron-open-model-license/" style="margin: 2px;">
+     <img alt="License" src="https://img.shields.io/badge/License-NVIDIA Nemotron Open Model License-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
+   </a>
+ </div>
+
+ ![](./accuracy_chart.svg)
+
+ ## Model Overview
+
+ **Model Developer:** NVIDIA Corporation
+
+ **Model Dates:**
+
+ September 2025 \- December 2025
+
+ **Data Freshness:**
+
+ The pre-training data has a cutoff date of June 25, 2025.
+
+ ## Description
+
+ NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16 is a base large language model (LLM) trained from scratch by NVIDIA with a next-token-prediction loss. It provides a good starting point for instruction fine-tuning.
+
+ This model is ready for commercial use.
+
+ ### What is Nemotron?
+
+ NVIDIA Nemotron™ is a family of open models with open weights, training data, and recipes, delivering leading efficiency and accuracy for building specialized AI agents.
+
+ ## License/Terms of Use
+
+ GOVERNING TERMS: Use of this model is governed by the [NVIDIA Nemotron Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-nemotron-open-model-license/).
+
+ ## Base Benchmark Evaluations
+
+ We evaluated our model on the following benchmarks:
+
+ | Task | Qwen3 30B-A3B-Base | NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16 |
+ | :---- | :---- | :---- |
+ | **General Knowledge** | | |
+ | MMLU (5-shot, acc) | **81.07** | 78.56 |
+ | MMLU-Pro (5-shot, CoT EM) | 61.71 | **65.05** |
+ | AGIEval-En (3/5-shot, CoT acc) | 63.12 | **68.32** |
+ | **Code** | | |
+ | HumanEval (0-shot) | 70.73 | **78.05** |
+ | MBPP-Sanitized (3-shot) | 73.15 | **75.49** |
+ | **Math** | | |
+ | GSM8K (8-shot, acc) | 89.01 | **92.34** |
+ | MATH (4-shot, acc) | 61.14 | **82.88** |
+ | MATH-500 (4-shot, avg@32) | 55.08 | **78.63** |
+ | **Commonsense Understanding** | | |
+ | ARC-Challenge (25-shot, acc_norm) | **94.45** | 91.89 |
+ | HellaSwag (10-shot, acc_norm) | 83.14 | **85.56** |
+ | OpenBookQA (0-shot, acc_norm) | 44.80 | **46.20** |
+ | PIQA (0-shot, acc_norm) | 81.01 | **84.33** |
+ | WinoGrande (5-shot, acc) | 78.22 | **79.64** |
+ | **Reading Comprehension** | | |
+ | RACE (0-shot, acc) | **90.05** | 88.04 |
+ | **Multilingual** | | |
+ | MMLU Global Lite (5-shot, avg acc) | **76.84** | 74.47 |
+ | MGSM (8-shot, avg acc) | 82.53 | **83.00** |
+ | **Long Context** | | |
+ | RULER (64K, 0-shot, acc) | 63.55 | **87.50** |
+ | RULER (128K, 0-shot, acc) | 60.69 | **82.92** |
+ | RULER (256K, 0-shot, acc) | Not Supported | **75.44** |
+ | RULER (512K, 0-shot, acc) | Not Supported | **70.56** |
+
+ All evaluation results were collected via the [NeMo Evaluator SDK](https://github.com/NVIDIA-NeMo/Evaluator) and [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). The open-source LM Evaluation Harness container used for these evaluations, packaged via NVIDIA's NeMo Evaluator SDK, can be found [here](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/eval-factory/containers/lm-evaluation-harness). A reproducibility tutorial, along with all configs, can be found in the [NeMo Evaluator SDK examples](https://github.com/NVIDIA-NeMo/Evaluator/tree/main/packages/nemo-evaluator-launcher/examples/nemotron/nano-v3-reproducibility.md).
+
+ ### Deployment Geography: Global
+
+ ### Use Case
+
+ This model is intended for developers and researchers building instruction-following LLMs.
+
+ Supported languages include: English, Spanish, French, German, Japanese, Italian, Chinese, Arabic, Hebrew, Hindi, Korean, Czech, Danish, Dutch, Finnish, Polish, Portuguese, Thai, Swedish, and Russian.
+
+ ### Release Date:
+
+ December 15, 2025 via [Hugging Face](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16)
+
+ ## Reference(s)
+
+ * [NVIDIA Nemotron 3 model family on Hugging Face](https://huggingface.co/collections/nvidia/nvidia-nemotron-v3)
+ * [NVIDIA Nemotron 2 model family on Hugging Face](https://huggingface.co/collections/nvidia/nvidia-nemotron-v2)
+ * [NVIDIA Nemotron 3 White Paper](https://arxiv.org/abs/2512.20856)
+
+ ## Model Architecture
+
+ - **Architecture Type:** Mamba2-Transformer Hybrid Mixture of Experts (MoE)
+ - **Network Architecture:** Nemotron Hybrid MoE (see the layer-pattern sketch below)
+ - **Number of model parameters:** 30B
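+
+ The per-layer layout is encoded by `hybrid_override_pattern` in this repo's `config.json`. A minimal sketch that tallies the layer types, reading "M" as Mamba2 and "*" as attention per the bundled `configuration_nemotron_h.py`, and assuming "E" marks the MoE (expert) layers used by this checkpoint:
+
+ ```python
+ from collections import Counter
+
+ # Copied from config.json in this repository.
+ pattern = "MEMEM*EMEMEM*EMEMEM*EMEMEM*EMEMEM*EMEMEMEM*EMEMEMEME"
+
+ assert len(pattern) == 52  # must equal num_hidden_layers
+ counts = Counter(pattern)
+ # "M" = Mamba2, "*" = attention (per configuration_nemotron_h.py);
+ # "E" is assumed here to denote an MoE (expert) layer.
+ print(counts["M"], counts["E"], counts["*"])  # 23 23 6
+ ```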
+
+ ## Training Methodology
+
+ Stage 1: Pre-Training
+ * The [NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16) model was pre-trained using crawled and synthetic code, math, science, and general knowledge data. All datasets are disclosed in the [Training, Testing, and Evaluation Datasets](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16#training-testing-and-evaluation-datasets) section of this document. Major portions of the pre-training corpus are released in the [Nemotron-Pre-Training-Datasets](https://huggingface.co/collections/nvidia/nemotron-pre-training-datasets) collection.
+ * Software used for pre-training: [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
+
+ The end-to-end training recipe is available in the [NVIDIA Nemotron Developer Repository](https://github.com/NVIDIA-NeMo/Nemotron). Evaluation results can be replicated using the [NeMo Evaluator SDK](https://github.com/NVIDIA-NeMo/Evaluator). More details on the datasets and synthetic data generation methods can be found in the technical report [NVIDIA Nemotron 3 Nano](https://arxiv.org/abs/2512.20848).
+
+ ## Input
+
+ - **Input Type(s):** Text
+
+ - **Input Format(s):** String
+
+ - **Input Parameters:** One-Dimensional (1D): Sequences
+
+ - **Maximum input size:** 128K tokens
+
+ - **Other Properties Related to Input:**
+   Supported languages include: English, Spanish, French, German, Japanese, Italian, Chinese, Arabic, Hebrew, Hindi, Korean, Czech, Danish, Dutch, Finnish, Polish, Portuguese, Thai, Swedish, and Russian.
+
+ ## Output
+
+ - **Output Type(s):** Text
+
+ - **Output Format:** String
+
+ - **Output Parameters:** One-Dimensional (1D): Sequences
+
+ - **Maximum output size:** 128K tokens
+
+ Our AI models are designed and optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
+
+ ## Software Integration
+
+ - Runtime Engine(s): NeMo 25.11.01
+ - Supported Hardware Microarchitecture Compatibility: NVIDIA H100-80GB, NVIDIA A100
+ - Operating System(s): Linux
+
+ The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
+
+ ### Use it with Transformers
+
+ The snippet below shows how to use this model with Hugging Face Transformers (tested on version 4.57.3). We recommend using NeMo Framework 25.11.01 to ensure all required libraries are available.
+
+ Please note that the model supports up to a 1M-token context, although the default context size in the Hugging Face configuration is 256K due to the higher VRAM requirements.
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ model_name = "nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype=torch.bfloat16,
+     trust_remote_code=True,
+     device_map="auto",
+ )
+
+ prompt = "The capital of France is"
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+
+ outputs = model.generate(
+     **inputs,
+     max_new_tokens=32,
+     do_sample=False,
+     eos_token_id=tokenizer.eos_token_id,
+ )
+
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
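+
+ If you need the longer contexts reported above, note that this checkpoint ships with `max_position_embeddings: 262144` (256K) in its `config.json`. One hedged option, sketched below, is to override the config at load time; the 1M value follows the maximum the card reports, and it assumes you have the VRAM for such sequences:
+
+ ```python
+ import torch
+ from transformers import AutoConfig, AutoModelForCausalLM
+
+ model_name = "nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16"
+
+ # Raise the default 256K window toward the 1M tokens the card reports.
+ config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
+ config.max_position_embeddings = 1_048_576  # assumes sufficient VRAM
+
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     config=config,
+     torch_dtype=torch.bfloat16,
+     trust_remote_code=True,
+     device_map="auto",
+ )
+ ```
+
+ The fast Mamba path additionally expects the `mamba-ssm` and `causal-conv1d` packages on a CUDA device; see `use_mamba_kernels` in the bundled `configuration_nemotron_h.py`.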
+
+ ## Model Version(s)
+
+ - v1.0
+
+ # Training, Testing, and Evaluation Datasets
+
+ **Data Modality:** Text
+ **Total size:** 10,648,823,153,919 tokens
+ **Total number of datasets:** 141
+ **Dataset partition:** *Training \[100%\], testing \[0%\], validation \[0%\]*
+ **Time period for training data collection:** 2013 to May 1, 2025
+ **Time period for testing data collection:** 2013 to May 1, 2025
+ **Time period for validation data collection:** 2013 to May 1, 2025
+ **Data Collection Method by dataset:** Hybrid: Automated, Human, Synthetic
+
+ NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16 is pre-trained on a large corpus of high-quality curated and synthetically generated data. It is trained on English as well as 19 other natural languages and 43 programming languages. Our sources cover a variety of document types, such as webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. We also include a small portion of question-answering and alignment-style data to improve model accuracy. The model was trained for approximately 25 trillion tokens.
+
+ Alongside the model, we release our final [pre-training](https://huggingface.co/collections/nvidia/nemotron-pre-training-datasets) data, as outlined in this section. For ease of analysis, an ungated sample set is available. The remaining code, math, and multilingual data require gating and approval, and are permissively licensed for model-training purposes.
+
+ More details on the datasets and synthetic data generation methods can be found in the technical report [NVIDIA Nemotron 3 Nano](https://arxiv.org/abs/2512.20848).
+
+ ## Public Datasets
+
+ | Dataset | Collection Period |
+ | :---- | :---- |
+ | [GSM8K](https://github.com/openai/grade-school-math) | 4/23/2025 |
+ | [CC-NEWS](https://commoncrawl.org/blog/news-dataset-available) | 4/23/2025 |
+ | [Common Crawl](https://commoncrawl.org/) | 4/23/2025 |
+ | [Wikimedia](https://dumps.wikimedia.org/) | 4/23/2025 |
+ | [Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k) | 4/23/2025 |
+ | [tigerbot-kaggle-leetcodesolutions-en-2k](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k) | 4/23/2025 |
+ | [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2) | 4/23/2025 |
+ | [APIGen Function-Calling](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k) | 4/23/2025 |
+ | [LMSYS-Chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) | 4/23/2025 |
+ | [Open Textbook Library \- CC BY-SA & GNU subset](https://open.umn.edu/opentextbooks/textbooks/) and [OpenStax \- CC BY-SA subset](https://openstax.org/) | 4/23/2025 |
+ | [Advanced Reasoning Benchmark](https://github.com/TheDuckAI/arb), [tigerbot-kaggle-leetcodesolutions-en-2k](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k), [PRM800K](https://github.com/openai/prm800k), and [SciBench](https://github.com/mandyyyyii/scibench) | 4/23/2025 |
+ | [FineWeb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) | 4/23/2025 |
+ | [Court Listener](https://www.courtlistener.com/help/api/bulk-data/) | Legacy Download |
+ | [peS2o](https://huggingface.co/datasets/allenai/peS2o) | Legacy Download |
+ | [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) | Legacy Download |
+ | [BioRxiv](https://www.biorxiv.org/tdm) | Legacy Download |
+ | [PMC Open Access Subset](https://pmc.ncbi.nlm.nih.gov/tools/openftlist/) | Legacy Download |
+ | [OpenWebText2](https://openwebtext2.readthedocs.io/en/latest/) | Legacy Download |
+ | [Stack Exchange Data Dump](https://archive.org/details/stackexchange) | Legacy Download |
+ | [PubMed Abstracts](https://github.com/thoppe/The-Pile-PubMed) | Legacy Download |
+ | [NIH ExPorter](https://exporter.nih.gov/ExPORTER_Catalog.aspx) | Legacy Download |
+ | [arXiv](https://info.arxiv.org/help/bulk_data/index.html) | Legacy Download |
+ | [BigScience Workshop Datasets](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#datasets) | Legacy Download |
+ | [Reddit Dataset](https://files.pushshift.io/reddit/) | Legacy Download |
+ | [SEC's Electronic Data Gathering, Analysis, and Retrieval (EDGAR)](https://www.sec.gov/search-filings) | Legacy Download |
+ | [Advanced Mathematical Problem Solving](https://github.com/hendrycks/math?tab=readme-ov-file) | Legacy Download |
+ | [MathPile](https://github.com/GAIR-NLP/MathPile/) | Legacy Download |
+ | [NuminaMath CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT) | Legacy Download |
+ | [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/) | Legacy Download |
+ | [FLAN](https://github.com/google-research/FLAN) | Legacy Download |
+ | [Advanced Reasoning Benchmark](https://github.com/TheDuckAI/arb) | Legacy Download |
+ | [SciBench](https://github.com/mandyyyyii/scibench) | Legacy Download |
+ | [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions) | Legacy Download |
+ | [FinQA](https://finqasite.github.io/) | Legacy Download |
+ | [Riddles](https://github.com/crawsome/riddles) | Legacy Download |
+ | [Problems in Elementary Mathematics for Home Study](https://archive.org/details/AntonovVygodskyNikitinSankinProblemsInElementaryMathematicsForHomeStudyMir1982) | Legacy Download |
+ | [MedMCQA](https://huggingface.co/datasets/openlifescienceai/medmcqa) | Legacy Download |
+ | [Cosmos QA](https://huggingface.co/datasets/allenai/cosmos_qa) | Legacy Download |
+ | [MCTest](https://huggingface.co/datasets/sagnikrayc/mctest) | Legacy Download |
+ | [AI2's Reasoning Challenge](https://huggingface.co/datasets/ai2_arc) | Legacy Download |
+ | [OpenBookQA](https://github.com/allenai/OpenBookQA) | Legacy Download |
+ | [MMLU Auxiliary Train](https://huggingface.co/datasets/cais/mmlu/viewer/all/auxiliary_train) | Legacy Download |
+ | [social-chemestry-101](https://huggingface.co/datasets/tasksource/social-chemestry-101) | Legacy Download |
+ | [Moral Stories](https://huggingface.co/datasets/demelin/moral_stories) | Legacy Download |
+ | [The Common Pile v0.1](https://huggingface.co/common-pile) | Legacy Download |
+ | [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath) | Legacy Download |
+ | [MegaMath](https://huggingface.co/datasets/LLM360/MegaMath) | Legacy Download |
+ | [MultiverseMathHard](https://huggingface.co/datasets/Nexusflow/MultiverseMathHard) | 10/2/2025 |
+ | [News Commentary](https://opus.nlpl.eu/News-Commentary.php) | 10/2/2025 |
+ | [Essential-Web](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0) | 10/2/2025 |
+ | [finepdfs](https://huggingface.co/datasets/HuggingFaceFW/finepdfs) | 10/2/2025 |
+ | [HotpotQA](https://huggingface.co/hotpot_qa/datasets) | 10/2/2025 |
+ | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | 10/2/2025 |
+ | [NLTK Words Lists](https://www.nltk.org/nltk_data/) | 10/2/2025 |
+
+ ## Private Non-publicly Accessible Datasets of Third Parties
+
+ | Dataset |
+ | :---- |
+ | Global Regulation |
+ | TAUS Translation Memory |
+ | Scale HLE |
+ | HackerRank Coding |
+
+ ## Private Non-publicly Accessible Datasets by NVIDIA
+
+ | Dataset |
+ | :---- |
+ | Machine Translation of STEM data using [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) |
+
+ ## Crawled and Scraped from Online Sources by NVIDIA
+
+ The English Common Crawl data was downloaded from the Common Crawl Foundation (see their FAQ for details on their crawling) and includes the snapshots CC-MAIN-2013-20 through CC-MAIN-2025-13. The data was subsequently deduplicated and filtered in various ways described in the Nemotron-CC paper. Additionally, we extracted data for fifteen languages from the following three Common Crawl snapshots: CC-MAIN-2024-51, CC-MAIN-2025-08, and CC-MAIN-2025-18. The fifteen languages were Arabic, Chinese, Danish, Dutch, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Swedish, and Thai. As we did not have reliable multilingual model-based quality classifiers available, we applied only heuristic filtering, similar to what we did for lower-quality English data in the Nemotron-CC pipeline, while selectively removing some filters for languages where they did not work well. Deduplication was done in the same way as for Nemotron-CC.
+
+ The GitHub Crawl was collected using the GitHub REST API and the Amazon S3 API. Each crawl was operated in accordance with the rate limits set by its respective source, either GitHub or S3. We collect raw source code and subsequently remove any file whose license is not in our permissive-license set (for additional details, refer to the [technical report](https://arxiv.org/abs/2512.20848)).
+
+ | Dataset | Modality | Dataset Size | Collection Period | Collecting Organisation |
+ | :---- | :---- | :---- | :---- | :---- |
+ | English Common Crawl | Text | 3.36T | 4/8/2025 | NVIDIA Advanced Deep Learning Research |
+ | English Common Crawl 1.1 | Text | Not disclosed | 10/2/2025 | NVIDIA Advanced Deep Learning Research |
+ | Multilingual Common Crawl | Text | 812.7B | 5/1/2025 | NVIDIA Advanced Deep Learning Research |
+ | GitHub Crawl | Text | 747.4B | 4/29/2025 | NVIDIA Advanced Deep Learning Research |
+
+ ## NVIDIA-Sourced Synthetic Datasets
+
+ | Dataset | Modality | Dataset Size | Seed Dataset | Model(s) used for generation |
+ | :---- | :---- | :---- | :---- | :---- |
+ | Synthetic Art of Problem Solving from DeepSeek-R1 | Text | 40B | [Art of Problem Solving](https://artofproblemsolving.com/company); [American Mathematics Competitions 8](https://artofproblemsolving.com/wiki/index.php/AMC_8_Problems_and_Solutions); [American Mathematics Competitions 10](https://artofproblemsolving.com/wiki/index.php/AMC_10_Problems_and_Solutions) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
+ | Synthetic Moral Stories and Social Chemistry from Mixtral-8x22B-v0.1 | Text | 327M | [social-chemestry-101](https://huggingface.co/datasets/tasksource/social-chemestry-101); [Moral Stories](https://huggingface.co/datasets/demelin/moral_stories) | [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1) |
+ | Synthetic Social Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B | Text | 83.6M | [OpenStax \- CC BY-SA subset](https://openstax.org/) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1); [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) |
+ | Synthetic Health Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B | Text | 9.7M | [OpenStax \- CC BY-SA subset](https://openstax.org/) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1); [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) |
+ | Synthetic STEM seeded with OpenStax, Open Textbook Library, and GSM8K from DeepSeek-R1, DeepSeek-V3, DeepSeek-V3-0324, and Qwen2.5-72B | Text | 175M | [OpenStax \- CC BY-SA subset](https://openstax.org/); [GSM8K](https://github.com/openai/grade-school-math); [Open Textbook Library \- CC BY-SA & GNU subset](https://open.umn.edu/opentextbooks/textbooks/) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1); [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324); [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) |
+ | [Nemotron-PrismMath](https://huggingface.co/datasets/nvidia/Nemotron-PrismMath) | Text | 4.6B | [Big-Math-RL-Verified](https://huggingface.co/datasets/SynthLabsAI/Big-Math-RL-Verified); [OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) | [Qwen2.5-0.5B-instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct); [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct); [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
+ | Synthetic Question Answering Data from Papers and Permissible Books from Qwen2.5-72B-Instruct | Text | 350M | [arXiv](https://info.arxiv.org/help/bulk_data/index.html); [National Institutes of Health ExPorter](https://www.nih.gov/); [BioRxiv](https://www.biorxiv.org/tdm); [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/); [USPTO Backgrounds](https://data.uspto.gov/apis/transition-guide/bdss#pats); [peS2o](https://huggingface.co/datasets/allenai/peS2o); Global Regulation; [CORE](https://core.ac.uk/documentation/dataset); [PG-19](https://github.com/google-deepmind/pg19); [DOAB CC BY & CC BY-SA subset](https://www.doabooks.org/en); [NDLTD](https://ndltd.org/thesis-resources/global-etd-search/) | [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) |
+ | Synthetic Rephrased [Math Data from Common Crawl](https://huggingface.co/datasets/nvidia/Nemotron-MIND) from phi-4 | Text | 73B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) |
+ | Synthetic Math Data from Common Crawl 4plus | Text | 52.3B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) |
+ | Synthetic Math Data from Common Crawl 3 | Text | 80.9B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) |
+ | Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from DeepSeek-V3 and DeepSeek-V3-0324 | Text | 4.0B | [AQUA-RAT](https://huggingface.co/datasets/deepmind/aqua_rat); [LogiQA](https://huggingface.co/datasets/lucasmccabe/logiqa); [AR-LSAT](https://github.com/zhongwanjun/AR-LSAT) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324) |
+ | Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from Qwen3-30B-A3B | Text | 4.2B | [AQUA-RAT](https://huggingface.co/datasets/deepmind/aqua_rat); [LogiQA](https://huggingface.co/datasets/lucasmccabe/logiqa); [AR-LSAT](https://github.com/zhongwanjun/AR-LSAT) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
+ | Synthetic Art of Problem Solving from Qwen2.5-32B-Instruct, Qwen2.5-Math-72B, Qwen2.5-Math-7B, and Qwen2.5-72B-Instruct | Text | | [Art of Problem Solving](https://artofproblemsolving.com/company); [American Mathematics Competitions 8](https://artofproblemsolving.com/wiki/index.php/AMC_8_Problems_and_Solutions); [American Mathematics Competitions 10](https://artofproblemsolving.com/wiki/index.php/AMC_10_Problems_and_Solutions); [GSM8K](https://github.com/openai/grade-school-math); [PRM800K](https://github.com/openai/prm800k) | [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct); [Qwen2.5-Math-72B](https://huggingface.co/Qwen/Qwen2.5-Math-72B); [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B); [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) |
+ | Synthetic MMLU Auxiliary Train from DeepSeek-R1 | Text | 0.5B | [MMLU Auxiliary Train](https://huggingface.co/datasets/cais/mmlu/viewer/all/auxiliary_train) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
+ | Synthetic Long Context Continued Post-Training Data from Papers and Permissible Books from Qwen2.5-72B-Instruct | Text | | [arXiv](https://info.arxiv.org/help/bulk_data/index.html); [National Institutes of Health ExPorter](https://www.nih.gov/); [BioRxiv](https://www.biorxiv.org/tdm); [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/); [USPTO Backgrounds](https://data.uspto.gov/apis/transition-guide/bdss#pats); [peS2o](https://huggingface.co/datasets/allenai/peS2o); Global Regulation; [CORE](https://core.ac.uk/documentation/dataset); [PG-19](https://github.com/google-deepmind/pg19); [DOAB CC BY & CC BY-SA subset](https://www.doabooks.org/en); [NDLTD](https://ndltd.org/thesis-resources/global-etd-search/) | [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) |
+ | Synthetic Common Crawl from Qwen3-30B-A3B and Mistral-Nemo-12B-Instruct | Text | 415.8B | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B); [Mistral-NeMo-12B-Instruct](https://huggingface.co/nvidia/Mistral-NeMo-12B-Instruct) |
+ | Synthetic Multilingual Data from Common Crawl from Qwen3-30B-A3B | Text | | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
+ | Synthetic Multilingual Data from Wikimedia from Qwen3-30B-A3B | Text | | [Wikimedia](https://dumps.wikimedia.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
+ | Synthetic Math Data from Wikimedia from Nemotron-4-340B-Instruct | Text | | \- | [Nemotron-4-340B-Instruct](https://huggingface.co/nvidia/Nemotron-4-340B-Instruct) |
+ | Synthetic Common Crawl Code from phi-4 | Text | 427.9B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) |
+ | Synthetic Scientific Coding from Qwen3-235B-A22B | Text | 1.2B | [Wikimedia](https://dumps.wikimedia.org/) | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507) |
+ | Tool Calling Data | Text | 26.2B | | [Qwen3-235B-A22B-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507); [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) |
+ | Synthetic Essential-Web from QwQ-32B | Text | 28.1B | [Essential-Web](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0) | [QwQ-32B](https://huggingface.co/Qwen/QwQ-32B) |
+ | Translated Synthetic Crawl | Text | 389.9B | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
+ | Translated Synthetic Wikipedia | Text | 7.9B | [Wikimedia](https://dumps.wikimedia.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
+ | Synthetic Long Context from Qwen3-235B-A22B-Instruct-2507 | Text | Undisclosed | [CORE](https://core.ac.uk/documentation/dataset); [PG-19](https://github.com/google-deepmind/pg19); [DOAB CC BY & CC BY-SA subset](https://www.doabooks.org/en); [NDLTD](https://ndltd.org/thesis-resources/global-etd-search/) | [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507) |
+ | Synthetic Search STEM OPENQ from DeepSeek-R1-0528 | Text | Undisclosed | - | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
+ | Synthetic MCQ from Qwen2.5-32B-Instruct and DeepSeek-R1-0528 | Text | Undisclosed | - | [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
+ | Synthetic Offline Search MCQA HLE from DeepSeek-R1-0528 | Text | Undisclosed | - | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
+ | Synthetic Offline Search MCQA GPQA from Qwen3-235B-A22B and DeepSeek-R1-0528 | Text | Undisclosed | - | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
+ | Synthetic Human Preference from QwQ-32B, Qwen3-30B-A3B, Qwen3-235B-A22B, Qwen3-235B-A22B-Instruct-2507, Mistral-Small-3.1-24B-Instruct-2503, Mistral-Small-3.2-24B-Instruct-2506, MiniMax-M1-80k, MiniMax-M1-40k, Kimi-K2-Instruct, DeepSeek-V3-0324, DeepSeek-R1-0528 | Text | Undisclosed | - | [QwQ-32B](https://huggingface.co/Qwen/QwQ-32B); [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B); [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507); [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503); [Mistral-Small-3.2-24B-Instruct-2506](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506); [MiniMax-M1-80k](https://huggingface.co/MiniMaxAI/MiniMax-M1-80k); [MiniMax-M1-40k](https://huggingface.co/MiniMaxAI/MiniMax-M1-40k); [Kimi-K2-Instruct](https://huggingface.co/moonshotai/Kimi-K2-Instruct); [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
+ | Synthetic Code from Qwen3-32B | Text | Undisclosed | English Common Crawl; English Common Crawl 1.1 | [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) |
+ | Synthetic OpenCodeReasoning from DeepSeek-R1 | Text | Undisclosed | [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
+ | Synthetic LIMO from DeepSeek-R1-0528 | Text | Undisclosed | [LIMO](https://huggingface.co/datasets/GAIR/LIMO) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
+ | Synthetic SCP from DeepSeek-R1-0528 | Text | Undisclosed | [SCP-116K](https://huggingface.co/datasets/EricLu/SCP-116K) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
+ | Synthetic Stack Exchange from DeepSeek-R1-0528 | Text | Undisclosed | [Stack Exchange](https://archive.org/details/stackexchange) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
+ | Synthetic Common Crawl from Qwen3-30B-A3B | Text | Undisclosed | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
+ | Synthetic Wikipedia from Qwen3-30B-A3B | Text | Undisclosed | [Wikimedia](https://dumps.wikimedia.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
+ | Synthetic Essential-Web from Qwen3-30B-A3B and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | [Essential-Web](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B); [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) |
+ | Synthetic Textbook Math from Qwen3-30B-A3B, Qwen3-235B-A22B, phi-4 | Text | Undisclosed | [Common Crawl](https://commoncrawl.org/); [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B); [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [phi-4](https://huggingface.co/microsoft/phi-4) |
+ | Synthetic Math and Code from DeepSeek-R1 and DeepSeek-R1-0528 | Text | Undisclosed | [Magicoder-Evol-Instruct-110K](https://huggingface.co/datasets/ise-uiuc/Magicoder-Evol-Instruct-110K); [opc-sft-stage2](https://huggingface.co/datasets/OpenCoder-LLM/opc-sft-stage2); [TACO](https://huggingface.co/datasets/BAAI/TACO); [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning); [OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning); [NuminaMath CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
+
+ ## Training Dataset
+
+ | Dataset | \# of Tokens in Nemotron Nano 2 | \# of Tokens in Nemotron 3 Nano |
+ | :---- | :---- | :---- |
+ | English Common Crawl | 3,360,110,334,818 | 3,456,523,212,210 |
+ | English Synthetic CC | 1,949,464,641,123 | 4,340,740,677,920 |
+ | Crawl++ | 360,389,153,262 | 360,389,153,262 |
+ | Math | 124,606,230,663 | 154,217,502,165 |
+ | Synthetic Math | 73,007,767,155 | 73,007,767,155 |
+ | Code | 747,409,228,724 | 1,043,856,922,136 |
+ | Synthetic Code | 175,067,553,293 | 453,117,917,176 |
+ | Common Crawl Code | 0 | 263,072,374,097 |
+ | English Wiki | 17,349,266,926 | 17,349,266,926 |
+ | Synthetic Wiki | 0 | 7,850,648,552 |
+ | Books | 0 | 0 |
+ | Papers | 191,586,493,365 | 191,586,493,365 |
+ | PDF-to-text | 141,096,578,533 | 141,096,578,533 |
+ | Code SFT | 60,025,726,817 | 102,863,752,325 |
+ | STEM SFT | 272,680,426,295 | 359,826,214,274 |
+ | General SFT | 6,057,478,645 | 6,057,478,645 |
+ | Tool-Calling SFT | 0 | 26,244,716,867 |
+ | Multilingual | 2,172,261,909,350 | 1,743,892,490,859 |
+ | Synthetic multilingual | 997,710,364,950 | 595,140,661,135 |
+ | **Total** | **10,648,823,153,919** | **13,336,833,827,602** |
+
+ We use a considerable amount of synthetic data. Out of 10.6 trillion tokens, 3,534,013,958,278 tokens are synthetically generated; the short tally below reproduces this figure from the table.
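+
+ A minimal check, summing the synthetic rows of the Nemotron Nano 2 column (taking the SFT rows as synthetic, which makes the arithmetic close exactly):
+
+ ```python
+ # Synthetic rows of the "# of Tokens in Nemotron Nano 2" column above.
+ synthetic_tokens = {
+     "English Synthetic CC": 1_949_464_641_123,
+     "Synthetic Math": 73_007_767_155,
+     "Synthetic Code": 175_067_553_293,
+     "Synthetic Wiki": 0,
+     "Code SFT": 60_025_726_817,
+     "STEM SFT": 272_680_426_295,
+     "General SFT": 6_057_478_645,
+     "Tool-Calling SFT": 0,
+     "Synthetic multilingual": 997_710_364_950,
+ }
+ total = sum(synthetic_tokens.values())
+ assert total == 3_534_013_958_278  # the figure quoted above
+ print(f"{total:,}")
+ ```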
+
+ We extracted data for fifteen languages from the following three Common Crawl snapshots: CC-MAIN-2024-51, CC-MAIN-2025-08, and CC-MAIN-2025-18. The fifteen languages were Arabic, Chinese, Danish, Dutch, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Swedish, and Thai. As we did not have reliable multilingual model-based quality classifiers available, we applied only heuristic filtering, similar to what we did for lower-quality English data in the Nemotron-CC pipeline, while selectively removing some filters for languages where they did not work well. Deduplication was done in the same way as for Nemotron-CC. Additionally, we used data from Wikipedia and FineWeb-2 (Penedo et al., 2025) for these fifteen languages.
+
+ | Language | Total Tokens |
+ | :---- | :---- |
+ | Arabic | 118,056,362,726 |
+ | Danish | 117,747,321,618 |
+ | German | 146,613,691,781 |
+ | Spanish | 469,156,575,409 |
+ | French | 139,982,002,289 |
+ | Italian | 298,858,370,174 |
+ | Japanese | 682,755,693,336 |
+ | Korean | 127,099,747,538 |
+ | Dutch | 89,041,592,681 |
+ | Polish | 105,356,493,147 |
+ | Portuguese | 243,249,275,089 |
+ | Russian | 185,314,014,057 |
+ | Swedish | 74,954,953,299 |
+ | Thai | 160,778,944,467 |
+ | Chinese | 211,007,236,689 |
+
+ We collect a total of 922,476,782,017 tokens of code in 43 different languages.
+
+ | Language | Tokens |
+ | :---- | :---- |
+ | Assembly | 750,628,764 |
+ | C | 42,657,300,868 |
+ | C\# | 56,153,329,307 |
+ | C++ | 67,773,701,658 |
+ | CommonLisp | 263,234,672 |
+ | CSS | 38,848,760,035 |
+ | Cuda | 400,222,993 |
+ | Dart | 3,816,960,470 |
+ | Dockerfile | 474,958,084 |
+ | Fortran | 1,105,049,387 |
+ | Go | 8,332,419,480 |
+ | Haskell | 1,294,613,669 |
+ | HTML | 69,082,117,487 |
+ | Java | 131,440,465,822 |
+ | JavaScript | 75,573,420,861 |
+ | JSON | 15,366,881,241 |
+ | Julia | 621,046,949 |
+ | JupyterNotebook | 2,241,893,197 |
+ | Lua | 4,146,420,802 |
+ | Makefile | 12,640,010,879 |
+ | Markdown | 64,796,743,311 |
+ | Mathematica | 320,504,225 |
+ | OmniversePython | 26,946,093 |
+ | Pascal | 1,625,013,876 |
+ | Perl | 1,575,314,434 |
+ | PHP | 61,575,339,005 |
+ | Python | 126,916,727,384 |
+ | R | 19,811,381,935 |
+ | reStructuredText | 1,779,876,391 |
+ | Ruby | 6,446,962,615 |
+ | Rust | 4,438,640,533 |
+ | Scala | 3,343,959,154 |
+ | Shell | 18,758,779,250 |
+ | SQL | 23,205,633,085 |
+ | Swift | 5,976,714,881 |
+ | SystemVerilog | 233,056,185 |
+ | TeX | 7,347,157,527 |
+ | TypeScript | 15,657,838,582 |
+ | Verilog | 811,884,369 |
+ | VHDL | 648,401,444 |
+ | VisualBasic.NET | 1,005,680,881 |
+ | XML | 12,616,779,741 |
+ | YAML | 10,574,010,491 |
+
+ ## Evaluation Dataset
+
+ * Data Collection Method by dataset: Hybrid: Human, Synthetic
+ * Labeling Method by dataset: Hybrid: Automated, Human, Synthetic
+
+ ## Inference
+
+ - Engines: HF, vLLM, TRT-LLM (see the vLLM sketch below)
+
+ - Test Hardware: NVIDIA A100 80GB, H100 80GB
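+
+ A minimal vLLM sketch with hypothetical settings; it assumes a vLLM build that supports this architecture, and `trust_remote_code=True` because the checkpoint ships custom modeling code:
+
+ ```python
+ from vllm import LLM, SamplingParams
+
+ llm = LLM(
+     model="nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16",
+     trust_remote_code=True,
+ )
+ params = SamplingParams(temperature=0.0, max_tokens=32)  # greedy, like the HF snippet
+ outputs = llm.generate(["The capital of France is"], params)
+ print(outputs[0].outputs[0].text)
+ ```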
+
+ ## Ethical Considerations
+
+ NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our [Trustworthy AI terms of service](https://www.nvidia.com/en-us/agreements/trustworthy-ai/terms/), developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
+
+ We advise against circumvention of any provided safety guardrails contained in the Model without a substantially similar guardrail appropriate for your use case. For more details: [Safety & Security](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16/blob/main/safety.md).
+
+ For more detailed information on ethical considerations for this model, please see the Model Card++ [Bias](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16/blob/main/bias.md), [Explainability](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16/blob/main/explainability.md), and [Privacy](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16/blob/main/privacy.md) Subcards.
+
+ Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
+
+ ## Citation
+
+ ```bibtex
+ @misc{nvidia_nemotron_nano_v3_2025,
+   title  = {{Nemotron 3 Nano}: Open, Efficient Mixture-of-Experts Hybrid {Mamba}-{Transformer} Model for {Agentic} Reasoning},
+   author = {{NVIDIA}},
+   year   = {2025},
+   url    = {https://arxiv.org/abs/2512.20848},
+   note   = {Technical report}
+ }
+ ```
accuracy_chart.svg ADDED
bias.md ADDED
@@ -0,0 +1,9 @@
+ | Field | Response |
+ | :---- | :---- |
+ | Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None |
+ | Bias Metric (If Measured): | Not Relevant for the Base Model |
+ | Which characteristic (feature) show(s) the greatest difference in performance?: | The model shows high variance in the characteristics when it is used with a high temperature. |
+ | Which feature(s) have the worst performance overall? | Not Relevant for the Base Model |
+ | Measures taken to mitigate against unwanted bias: | Not Applicable |
+ | If using internal data, description of methods implemented in data acquisition or processing, if any, to address the prevalence of identifiable biases in the training, testing, and validation data: | The training datasets contain a large amount of synthetic data generated by LLMs. We manually curated prompts. |
+ | Tools used to assess statistical imbalances and highlight patterns that may introduce bias into AI models: | These datasets, such as Common Crawl, CC-News, and Wikimedia, do not collectively or exhaustively represent all demographic groups (and proportionally therein). For instance, these datasets do not contain explicit mentions of demographic classes such as age, gender, or ethnicity in over 85% of samples. In the subset where such terms are present, Common Crawl and CC-News contain notable representational skews—for example, references to "male" significantly outnumber those to "female," and mentions of "White" are the most frequent among ethnic identifiers. To mitigate these imbalances, we recommend considering evaluation techniques such as bias audits, fine-tuning with demographically balanced datasets, and mitigation strategies like counterfactual data augmentation to align with the desired model behavior. This evaluation used a 3,000-sample subset per dataset, identified as the optimal threshold for maximizing embedder accuracy, and includes outputs from uncalibrated embedders; as such, certain limitations may exist in the reliability of the embedding. |
config.json ADDED
@@ -0,0 +1,69 @@
+ {
+   "architectures": [
+     "NemotronHForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "auto_map": {
+     "AutoConfig": "configuration_nemotron_h.NemotronHConfig",
+     "AutoModel": "modeling_nemotron_h.NemotronHForCausalLM",
+     "AutoModelForCausalLM": "modeling_nemotron_h.NemotronHForCausalLM"
+   },
+   "bos_token_id": 1,
+   "chunk_size": 128,
+   "conv_kernel": 4,
+   "dtype": "bfloat16",
+   "eos_token_id": 2,
+   "expand": 2,
+   "head_dim": 128,
+   "hidden_dropout": 0.0,
+   "hidden_size": 2688,
+   "hybrid_override_pattern": "MEMEM*EMEMEM*EMEMEM*EMEMEM*EMEMEM*EMEMEMEM*EMEMEMEME",
+   "initializer_range": 0.02,
+   "intermediate_size": 1856,
+   "layer_norm_epsilon": 1e-05,
+   "mamba_head_dim": 64,
+   "mamba_hidden_act": "silu",
+   "mamba_num_heads": 64,
+   "mamba_proj_bias": false,
+   "max_position_embeddings": 262144,
+   "mlp_bias": false,
+   "mlp_hidden_act": "relu2",
+   "model_type": "nemotron_h",
+   "moe_intermediate_size": 1856,
+   "moe_shared_expert_intermediate_size": 3712,
+   "n_group": 1,
+   "n_groups": 8,
+   "n_routed_experts": 128,
+   "n_shared_experts": 1,
+   "norm_eps": 1e-05,
+   "norm_topk_prob": true,
+   "num_attention_heads": 32,
+   "num_experts_per_tok": 6,
+   "num_hidden_layers": 52,
+   "num_key_value_heads": 2,
+   "num_logits_to_keep": 1,
+   "pad_token_id": 0,
+   "partial_rotary_factor": 1.0,
+   "rescale_prenorm_residual": true,
+   "residual_in_fp32": false,
+   "rope_theta": 10000,
+   "routed_scaling_factor": 2.5,
+   "sliding_window": null,
+   "ssm_state_size": 128,
+   "tie_word_embeddings": false,
+   "time_step_floor": 0.0001,
+   "time_step_limit": [
+     0.0,
+     Infinity
+   ],
+   "time_step_max": 0.1,
+   "time_step_min": 0.001,
+   "topk_group": 1,
+   "transformers_version": "4.57.1",
+   "use_bias": false,
+   "use_cache": true,
+   "use_conv_bias": true,
+   "use_mamba_kernels": true,
+   "vocab_size": 131072
+ }
configuration_nemotron_h.py ADDED
@@ -0,0 +1,262 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # coding=utf-8
2
+ # Copyright 2024 AI21 Labs Ltd. and the HuggingFace Inc. team. All rights reserved.
3
+ # Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
4
+ #
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
15
+ # limitations under the License.
16
+ """NemotronH model configuration"""
17
+
18
+ import re
19
+
20
+ from transformers.configuration_utils import PretrainedConfig
21
+ from transformers.utils import logging
22
+
23
+
24
+ logger = logging.get_logger(__name__)
25
+
26
+
27
+ class NemotronHConfig(PretrainedConfig):
28
+ r"""
29
+ This is the configuration class to store the configuration of a [`NemotronHModel`]. It is used to instantiate a
30
+ NemotronH model according to the specified arguments, defining the model architecture. Instantiating a configuration
31
+ with the defaults will yield a similar configuration to that of the NemotronH-v0.1 model.
32
+
33
+ [todo](todo)
34
+
35
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
36
+ documentation from [`PretrainedConfig`] for more information.
37
+
38
+
39
+ Args:
40
+ vocab_size (`int`, *optional*, defaults to 131072):
41
+ Vocabulary size of the NemotronH model. Defines the number of different tokens that can be represented by the
42
+ `inputs_ids` passed when calling [`NemotronHModel`]
43
+ tie_word_embeddings (`bool`, *optional*, defaults to `False`):
44
+ Whether the model's input and output word embeddings should be tied. Note that this is only relevant if the
45
+ model has a output word embedding layer.
46
+ hidden_size (`int`, *optional*, defaults to 4096):
47
+ Dimension of the hidden representations.
48
+ intermediate_size (`int`, *optional*, defaults to 21504):
49
+ Dimension of the MLP representations.
50
+ num_hidden_layers (`int`, *optional*, defaults to 52):
51
+ Number of hidden layers in the Transformer encoder.
52
+ hybrid_override_pattern (`str`, *optional*, defaults to `"M-M-M-M*-M-M-M-M-M*-M-M-M-M-M*-M-M-M-M-M*-M-M-M-M-M-"`):
53
+ The pattern of the hybrid model. The pattern is a string of characters where each character represents M: Mamba2, *: Attention, -: MLP
54
+ num_attention_heads (`int`, *optional*, defaults to 32):
55
+ Number of attention heads for each attention layer in the Transformer encoder.
56
+ head_dim (`int`, *optional*, defaults to 128):
57
+ Dimension of each attention head.
58
+ num_key_value_heads (`int`, *optional*, defaults to 8):
59
+ This is the number of key_value heads that should be used to implement Grouped Query Attention. If
60
+ `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
61
+ `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used.
62
+ mlp_hidden_act (`str`, *optional*, defaults to "relu2"):
63
+ The non-linear activation function in the MLP layers.
64
+ attention_bias (`bool`, *optional*, defaults to `False`):
65
+ Whether to use bias in attention layers.
66
+ mlp_bias (`bool`, *optional*, defaults to `False`):
67
+ Whether to use bias in MLP layers.
68
+ use_bias (`bool`, *optional*, defaults to `False`):
69
+ Whether to use bias in the model.
70
+ initializer_range (`float`, *optional*, defaults to 0.02):
71
+ The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
72
+ layer_norm_epsilon (`float`, *optional*, defaults to 1e-5):
73
+ The epsilon used by the layer normalization layers.
74
+ residual_in_fp32 (`bool`, *optional*, defaults to `False`):
75
+ Whether or not residuals should be in `float32`. If set to `False` residuals will keep the same `dtype` as the rest of the model.
76
+ use_cache (`bool`, *optional*, defaults to `True`):
77
+ Whether or not the model should return the last key/values attentions (not used by all models). Only
78
+ relevant if `config.is_decoder=True`.
79
+ num_logits_to_keep (`int` or `None`, *optional*, defaults to 1):
80
+ Number of prompt logits to calculate during generation. If `None`, all logits will be calculated. If an
81
+ integer value, only last `num_logits_to_keep` logits will be calculated.
82
+ pad_token_id (`int`, *optional*, defaults to 0):
83
+ The id of the padding token.
84
+ bos_token_id (`int`, *optional*, defaults to 1):
85
+ The id of the "beginning-of-sequence" token.
86
+ eos_token_id (`int`, *optional*, defaults to 2):
87
+ The id of the "end-of-sequence" token.
88
+ sliding_window (`int`, *optional*, defaults to None):
89
+ Sliding window attention window size.
90
+ max_position_embeddings (`int`, *optional*, defaults to 4096):
91
+ The maximum sequence length that this model might ever be used with.
92
+ attention_dropout (`float`, *optional*, defaults to 0.0):
93
+ The dropout ratio for the attention probabilities.
94
+ hidden_dropout (`float`, *optional*, defaults to 0.0):
95
+ The dropout ratio for the hidden states.
96
+ use_mamba_kernels (`bool`, *optional*, defaults to `True`):
97
+ Flag indicating whether or not to use the fast mamba kernels. These are available only if `mamba-ssm` and
98
+ `causal-conv1d` are installed, and the mamba modules are running on a CUDA device.
99
+ ssm_state_size (`int`, *optional*, defaults to 128):
100
+ The dimension of the mamba state space latents.
101
+ mamba_num_heads (`int`, *optional*, defaults to 128):
102
+ Number of heads in Mamba layers.
103
+ mamba_n_groups (`int`, *optional*, defaults to 8):
104
+ Number of groups in Mamba layers.
105
+ mamba_head_dim (`int`, *optional*, defaults to 64):
106
+ Dimension of each Mamba head.
107
+ mamba_d_conv (`int`, *optional*, defaults to 4):
108
+ The size of the mamba convolution kernel.
109
+ mamba_expand (`int`, *optional*, defaults to 2):
110
+ Expanding factor used to determine the mamba intermediate size.
111
+ mamba_hidden_act (`str`, *optional*, defaults to "silu"):
112
+ The non-linear activation function in the Mamba layers.
113
+ mamba_dt_min (`float`, *optional*, defaults to 0.001):
114
+ Minimum value for the time step in Mamba.
115
+ mamba_dt_max (`float`, *optional*, defaults to 0.1):
116
+ Maximum value for the time step in Mamba.
117
+ mamba_dt_limit (`tuple`, *optional*, defaults to (0.0, float("inf"))):
118
+ Limits for the time step in Mamba.
119
+ mamba_dt_init_floor (`float`, *optional*, defaults to 1e-4):
120
+ Floor value for time step initialization in Mamba.
121
+ mamba_conv_bias (`bool`, *optional*, defaults to `True`):
122
+ Whether to use bias in the convolution layer of the mamba mixer block.
123
+ mamba_proj_bias (`bool`, *optional*, defaults to `False`):
124
+ Whether to use bias in the input and output projections of the mamba mixer block.
125
+ mamba_chunk_size (`int`, *optional*, defaults to 128):
126
+ Size of chunks for Mamba processing.
127
+ rescale_prenorm_residual (`bool`, *optional*, defaults to `True`):
128
+ Whether to rescale the pre-normalization residual connections.
129
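+ n_routed_experts (`int`, *optional*, defaults to 8):
+ Number of routed experts in the MoE layers.
+ n_shared_experts (`int`, *optional*, defaults to 1):
+ Number of shared experts in the MoE layers.
+ moe_intermediate_size (`int`, *optional*, defaults to 7688):
+ Intermediate size of each routed expert MLP.
+ moe_shared_expert_intermediate_size (`int`, *optional*, defaults to 7688):
+ Intermediate size of the shared expert MLP.
+ num_experts_per_tok (`int`, *optional*, defaults to 2):
+ Number of routed experts selected for each token.
+ routed_scaling_factor (`float`, *optional*, defaults to 1.0):
+ Scaling factor applied to the routed expert weights.
+ n_group (`int`, *optional*, defaults to 1):
+ Number of groups the routed experts are partitioned into for group-limited routing.
+ topk_group (`int`, *optional*, defaults to 1):
+ Number of expert groups each token may be routed to.
+ norm_topk_prob (`bool`, *optional*, defaults to `True`):
+ Whether to normalize the top-k routing weights so they sum to 1.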
+ """
130
+
131
+ model_type = "nemotron_h"
132
+ keys_to_ignore_at_inference = ["past_key_values"]
133
+
134
+ def __init__(
135
+ self,
136
+ vocab_size=131072,
137
+ tie_word_embeddings=False,
138
+ hidden_size=4096,
139
+ intermediate_size=21504,
140
+ num_hidden_layers=52,
141
+ hybrid_override_pattern="M-M-M-M*-M-M-M-M-M*-M-M-M-M-M*-M-M-M-M-M*-M-M-M-M-M-",
142
+ num_attention_heads=32,
143
+ head_dim=128,
144
+ num_key_value_heads=8, # nemo: num_query_groups
145
+ mlp_hidden_act="relu2",
146
+ attention_bias=False,
147
+ mlp_bias=False,
148
+ use_bias=False,
149
+ initializer_range=0.02, # nemo: init_method_std
150
+ layer_norm_epsilon=1e-5, # nemo: layernorm_epsilon
151
+ residual_in_fp32=False, # Megatron Core default value
152
+ use_cache=True,
153
+ num_logits_to_keep=1,
154
+ pad_token_id=0,
155
+ bos_token_id=1,
156
+ eos_token_id=2,
157
+ sliding_window=None,
158
+ max_position_embeddings=4096,
159
+ attention_dropout=0.0,
160
+ hidden_dropout=0.0, # * ADDED
161
+ use_mamba_kernels=True,
162
+ ssm_state_size=128, # mamba_state_size
163
+ mamba_num_heads=128,
164
+ mamba_n_groups=8, # nemo: mamba_ssm_ngroups = num_heads
165
+ mamba_head_dim=64,
166
+ mamba_d_conv=4,
167
+ mamba_expand=2,
168
+ mamba_hidden_act="silu",
169
+ mamba_dt_min=0.001,
170
+ mamba_dt_max=0.1,
171
+ mamba_dt_limit=(0.0, float("inf")),
172
+ mamba_dt_init_floor=1e-4,
173
+ mamba_conv_bias=True,
174
+ mamba_proj_bias=False,
175
+ mamba_chunk_size=128,
176
+ rescale_prenorm_residual=True,
177
+ n_routed_experts=8,
178
+ n_shared_experts=1,
179
+ moe_intermediate_size=7688,
180
+ moe_shared_expert_intermediate_size=7688,
181
+ num_experts_per_tok=2,
182
+ routed_scaling_factor=1.0,
183
+ n_group=1,
184
+ topk_group=1,
185
+ norm_topk_prob=True,
186
+ **kwargs,
187
+ ):
188
+ self.vocab_size = vocab_size
189
+ self.tie_word_embeddings = tie_word_embeddings
190
+ self.hidden_size = hidden_size
191
+ self.intermediate_size = intermediate_size
192
+ self.num_hidden_layers = num_hidden_layers
193
+ self.hybrid_override_pattern = hybrid_override_pattern
194
+ self.num_attention_heads = num_attention_heads
195
+ self.head_dim = head_dim
196
+ self.sliding_window = sliding_window
197
+ self.max_position_embeddings = max_position_embeddings
198
+ self.attention_dropout = attention_dropout
199
+ self.hidden_dropout = hidden_dropout
200
+
201
+ # Validate hybrid_override_pattern
202
+ # M: Mamba2, *: Attention, -: MLP
203
+ assert len(self.hybrid_override_pattern) == self.num_hidden_layers, "hybrid_override_pattern must have the same length as num_hidden_layers"
204
+ assert re.match(r"^[*-M]+$", self.hybrid_override_pattern), "hybrid_override_pattern must only contain characters 'M', '*', or '-'"
205
+
206
+ # for backward compatibility
207
+ if num_key_value_heads is None:
208
+ num_key_value_heads = num_attention_heads
209
+
210
+ self.num_key_value_heads = num_key_value_heads
211
+ self.mlp_hidden_act = mlp_hidden_act
212
+ self.attention_bias = attention_bias
213
+ self.mlp_bias = mlp_bias
214
+ self.use_bias = use_bias
215
+ self.initializer_range = initializer_range
216
+ self.layer_norm_epsilon = layer_norm_epsilon
217
+ self.residual_in_fp32 = residual_in_fp32
218
+
219
+ self.use_cache = use_cache
220
+ self.num_logits_to_keep = num_logits_to_keep
221
+
222
+ self.use_mamba_kernels = use_mamba_kernels
223
+ self.n_groups = mamba_n_groups
224
+ self.mamba_head_dim = mamba_head_dim
225
+ self.ssm_state_size = ssm_state_size
226
+ self.mamba_num_heads = mamba_num_heads
227
+ self.conv_kernel = mamba_d_conv
228
+ self.expand = mamba_expand
229
+ self.mamba_hidden_act = mamba_hidden_act
230
+ self.time_step_min = mamba_dt_min
231
+ self.time_step_max = mamba_dt_max
232
+ self.time_step_limit = mamba_dt_limit
233
+ self.time_step_floor = mamba_dt_init_floor
234
+ self.use_conv_bias = mamba_conv_bias
235
+ self.mamba_proj_bias = mamba_proj_bias
236
+ self.chunk_size = mamba_chunk_size
237
+ self.rescale_prenorm_residual = rescale_prenorm_residual
238
+ self.n_routed_experts = n_routed_experts
239
+ self.n_shared_experts = n_shared_experts
240
+ self.moe_intermediate_size = moe_intermediate_size
241
+ self.moe_shared_expert_intermediate_size = moe_shared_expert_intermediate_size
242
+ self.num_experts_per_tok = num_experts_per_tok
243
+ self.routed_scaling_factor = routed_scaling_factor
244
+ self.n_group = n_group
245
+ self.topk_group = topk_group
246
+ self.norm_topk_prob = norm_topk_prob
247
+
248
+ super().__init__(
249
+ pad_token_id=pad_token_id,
250
+ bos_token_id=bos_token_id,
251
+ eos_token_id=eos_token_id,
252
+ tie_word_embeddings=tie_word_embeddings,
253
+ **kwargs,
254
+ )
255
+
256
+ @property
257
+ def layers_block_type(self):
258
+ return [
259
+ "mamba" if self.hybrid_override_pattern[i] == "M" else
260
+ "attention" if self.hybrid_override_pattern[i] == "*" else
261
+ "mlp" if self.hybrid_override_pattern[i] == "-" else "moe"
262
+ for i in range(self.num_hidden_layers)]
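+
+
+ # Illustrative usage (a sketch, not part of the checkpoint): with a short custom
+ # pattern, the block-type mapping can be inspected directly:
+ #
+ # config = NemotronHConfig(num_hidden_layers=3, hybrid_override_pattern="M*-")
+ # config.layers_block_type  # -> ["mamba", "attention", "mlp"]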
explainability.md ADDED
@@ -0,0 +1,14 @@
1
+ | Field | Response |
2
+ | :---- | :---- |
3
+ | Intended Task/Domain: | Text generation, reasoning, and chat |
4
+ | Model Type: | Text-to-text Mamba2-Transformer Hybrid |
5
+ | Intended Users: | Generative AI creators working with conversational AI models and text content. |
6
+ | Output: | Text |
7
+ | Tools used to evaluate datasets to identify synthetic data and ensure data authenticity: | We used a Gemma-3 4B-based filtering model fine-tuned on [Nemotron Content Safety Dataset v2](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0) to ensure the quality of synthetic data. |
8
+ | Describe how the model works: | Generates text by predicting the next token based on the context provided in the input sequence, using a hybrid of Mamba2 state-space layers and self-attention layers. |
9
+ | Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable |
10
+ | Technical Limitations & Mitigation: | This model performs particularly well in instruction-following regimes and, as such, may be strongly influenced by untrusted inputs; it should be paired with appropriate guardrails and data filtering to better align use-case behavior when exposed to such data. |
11
+ | Verified to have met prescribed NVIDIA quality standards: | Yes |
12
+ | Performance Metrics: | Accuracy, Throughput, and User-side throughput |
13
+ | Potential Known Risks: | The model was optimized explicitly for instruction following and, as a result of its instruction tuning, may be influenced by untrusted inputs (prompt injection, indirect prompt injection, jailbreaking, web search, etc.) in ways that degrade safety alignment and other training efforts. This model should be paired with additional guardrails and data filtering to limit exposure to instructions from malicious sources. Bypassing safety alignment, system guardrails, and filters may allow harmful outcomes up to and including remote code execution in some agentic systems when effective security controls are not in place. The model was trained on data that contains toxic language and societal biases originally crawled from the internet; therefore, it may generate and amplify harmful, biased, or otherwise unsafe content, reinforcing these biases and returning toxic responses, especially when prompted with toxic prompts. The model may also generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable output even if the prompt itself does not include anything explicitly offensive. The model may exhibit self-anthropomorphism (e.g., displaying human-like characteristics in dialogue, such as expressing preferences and emotions). In integrated system contexts, the model could potentially be exploited to access or disclose information beyond its intended permissions or scope of operation. |
14
+ | Licensing: | GOVERNING TERMS: Use of this model is governed by the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/). |
generation_config.json ADDED
@@ -0,0 +1,10 @@
1
+ {
2
+ "_from_model_config": true,
3
+ "bos_token_id": 1,
4
+ "eos_token_id": [
5
+ 2,
6
+ 11
7
+ ],
8
+ "pad_token_id": 0,
9
+ "transformers_version": "4.57.1"
10
+ }
model-00001-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a6a453e622cb76b6791a2f1c60aa7c8f97dce304d0e215e34b88e47bc937a68b
3
+ size 4991210024
model-00002-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f7362f9e3cfc3bd23c5c2c2f6f520914c6da6711e27eca221a396287bbb9e9b5
3
+ size 4992601016
model-00003-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6068ce876022895ca36b7ba553e11c381628c7cb8365d8dce7fa377278bf2b83
3
+ size 4992601368
model-00004-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:932dbd5beca60ced569b408d1dcab91765b23a4f88f2a82bc0aec3df354a14c9
3
+ size 4995692800
model-00005-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:57ec79cc6d9147831104e9c063a6c8ca61d6c7084a6960fdc617476465070ec7
3
+ size 4980553832
model-00006-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7e93dcfee76290ec6f8d530e344b390067197bda2cb5151bc161df917888aeaa
3
+ size 4989423536
model-00007-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:eca33a8f41bc9c43f5e267e25c76d667436c53dcabbb6ca2e95e4075570cc4a2
3
+ size 4992601488
model-00008-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:813a4811ee783e8be46e4cd4b49e185ee990f9dac40934cd3fd8b127eaca3ecd
3
+ size 4992601520
model-00009-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ac57992ddfba2ae131aa666a7f59e7310d3ca81cde270953b2c846489a30c099
3
+ size 4995692800
model-00010-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a9a73f676d1ef6892cf039ee7d156c7678ea78e19275398497b9fcadb971fec0
3
+ size 4992601520
model-00011-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4153d01cdbe92c1b0d1cc5247ed028c38477d5cd662c4fc214c06265f73e4551
3
+ size 4995692800
model-00012-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:18e68fd3a50df00b240615acf4fd3f9a821a1aff70dfe41446b1c3b60859c12b
3
+ size 4995692816
model-00013-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bdf84d933727aadd1aa3519276ea9ef090e74cf857bfa447bcbe6efdcd404dbf
3
+ size 3249723504
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
modeling_nemotron_h.py ADDED
@@ -0,0 +1,1740 @@
1
+ # coding=utf-8
2
+ # Copyright 2024 HuggingFace Inc. team.
3
+ # Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
4
+ #
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
15
+ # limitations under the License.
16
+ """PyTorch NemotronH model."""
17
+
18
+ import math
19
+ from dataclasses import dataclass
20
+ from typing import Any, Dict, Optional, Tuple, Union
21
+
22
+ import torch
23
+ import torch.utils.checkpoint
24
+ from torch import nn
25
+ from torch.nn import CrossEntropyLoss
26
+ import torch.nn.functional as F
27
+
28
+ from transformers.activations import ACT2FN
29
+ from transformers.cache_utils import DynamicCache # we need __iter__ and __len__ of pkv
30
+ from transformers.generation import GenerationMixin
31
+ from transformers.modeling_attn_mask_utils import (
32
+ AttentionMaskConverter,
33
+ )
34
+ from transformers.modeling_utils import PreTrainedModel
35
+ from transformers.utils import (
36
+ ModelOutput,
37
+ add_code_sample_docstrings,
38
+ add_start_docstrings,
39
+ add_start_docstrings_to_model_forward,
40
+ logging,
41
+ )
42
+ from transformers.utils.import_utils import (
43
+ is_causal_conv1d_available,
44
+ is_flash_attn_2_available,
45
+ is_flash_attn_greater_or_equal_2_10,
46
+ is_mamba_2_ssm_available,
47
+ )
48
+ from .configuration_nemotron_h import NemotronHConfig
49
+
50
+
51
+ logger = logging.get_logger(__name__)
52
+
53
+
54
+ # Copied from transformers.models.mamba2.modeling_mamba2.py with MAMBA2->NEMOTRONH, Mamba2->NemotronH
55
+ # For Mamba2 components Mamba2->NemotronHMamba2
56
+ if is_mamba_2_ssm_available():
57
+ from mamba_ssm.ops.triton.selective_state_update import selective_state_update
58
+ from mamba_ssm.ops.triton.ssd_combined import mamba_chunk_scan_combined, mamba_split_conv1d_scan_combined
59
+ else:
60
+ mamba_chunk_scan_combined, mamba_split_conv1d_scan_combined, selective_state_update = None, None, None
61
+
62
+ try:
63
+ #from mamba_ssm.ops.triton.layernorm_gated import RMSNorm as RMSNormGated
64
+ from mamba_ssm.ops.triton.layernorm_gated import rmsnorm_fn
65
+ except ImportError:
66
+ raise ImportError("mamba-ssm is required by the Mamba model but cannot be imported")
67
+
68
+ if is_causal_conv1d_available():
69
+ from causal_conv1d import causal_conv1d_fn, causal_conv1d_update
70
+ else:
71
+ causal_conv1d_update, causal_conv1d_fn = None, None
72
+
73
+ if is_flash_attn_2_available():
74
+ from transformers.modeling_flash_attention_utils import _flash_attention_forward
75
+
76
+ is_fast_path_available = all(
77
+ (
78
+ selective_state_update,
79
+ mamba_chunk_scan_combined,
80
+ mamba_split_conv1d_scan_combined,
81
+ causal_conv1d_fn,
82
+ causal_conv1d_update,
83
+ )
84
+ )
85
+
86
+
87
+ _CHECKPOINT_FOR_DOC = "nvidia/Nemotron-H-56B-Base-8K"
88
+ _CONFIG_FOR_DOC = "NemotronHConfig"
89
+
90
+
91
+ # Helper methods for segment sum computation
92
+
93
+
94
+ def pad_tensor_by_size(input_tensor: torch.Tensor, pad_size: int):
95
+ """
96
+ Padding x tensor with `pad_size` on the seq_len dim (dim=1)
97
+
98
+ Assumes the input tensor has either 3 or 4 dimensions
99
+ """
100
+ pad_shape = (0, 0, 0, 0, 0, pad_size, 0, 0) if len(input_tensor.shape) == 4 else (0, 0, 0, pad_size, 0, 0)
101
+
102
+ return torch.nn.functional.pad(input_tensor, pad_shape, mode="constant", value=0)
103
+
104
+
105
+ def reshape_into_chunks(input_tensor, pad_size, chunk_size):
106
+ """
107
+ Padding input_tensor with `pad_size` on the seq_len dim (dim=1) and
108
+ simultaneously splitting it into chunk sequences.
109
+
110
+ Assumes the input tensor has either 3 or 4 dimensions
111
+ """
112
+ # [bsz, seq_len, ...] -> [bsz, seq_len multiple of chunk_size, ...]
113
+ input_tensor = pad_tensor_by_size(input_tensor, pad_size)
114
+
115
+ if len(input_tensor.shape) == 3:
116
+ # [bsz, seq_len multiple of chunk_size, num_heads] -> [bsz, -1, chunk_size, num_heads]
117
+ return input_tensor.reshape(input_tensor.shape[0], -1, chunk_size, input_tensor.shape[2])
118
+ else:
119
+ # [bsz, seq_len multiple of chunk_size, num_heads, head_dim or state_size] -> [bsz, -1, chunk_size, num_heads, head_dim or state_size]
120
+ return input_tensor.reshape(
121
+ input_tensor.shape[0], -1, chunk_size, input_tensor.shape[2], input_tensor.shape[3]
122
+ )
123
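+
+ # Shape example (illustrative): a [2, 10, 8] tensor with pad_size=2 and
+ # chunk_size=4 is padded to [2, 12, 8] and reshaped to [2, 3, 4, 8].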
+
124
+
125
+ def segment_sum(input_tensor):
126
+ """
127
+ More stable segment sum calculation. Uses cumulative sums and masking instead of direct subtractions.
128
+ """
129
+ chunk_size = input_tensor.size(-1)
130
+ # 1. expand input tensor to have an additional dimension and repeat along that dimension
131
+ # [..., chunk_size] -> [..., chunk_size, chunk_size]
132
+ input_tensor = input_tensor[..., None].expand(*input_tensor.size(), chunk_size)
133
+ # 2. create a strictly lower triangular mask (diagonal=-1) to zero out elements on and above the diagonal
134
+ mask = torch.tril(torch.ones(chunk_size, chunk_size, device=input_tensor.device, dtype=torch.bool), diagonal=-1)
135
+ input_tensor = input_tensor.masked_fill(~mask, 0)
136
+ # 3. compute actual cumsum
137
+ tensor_segsum = torch.cumsum(input_tensor, dim=-2)
138
+
139
+ # 4. apply mask to keep only the lower triangular part of the cumulative sum result (incl diagonal this time)
140
+ mask = torch.tril(torch.ones(chunk_size, chunk_size, device=input_tensor.device, dtype=torch.bool), diagonal=0)
141
+ tensor_segsum = tensor_segsum.masked_fill(~mask, -torch.inf)
142
+ return tensor_segsum
143
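+
+
+ # Illustrative semantics: for a 1-D input a of length T, the output S satisfies
+ # S[i, j] = a[j+1] + ... + a[i] for i > j, S[i, i] = 0, and S[i, j] = -inf above
+ # the diagonal, so exp(S) is the lower-triangular decay matrix used in the scan.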
+
144
+
145
+ def apply_mask_to_padding_states(hidden_states, attention_mask):
146
+ """
147
+ Tunes out the hidden states for padding tokens, see https://github.com/state-spaces/mamba/issues/66
148
+ """
149
+ if attention_mask is not None and attention_mask.shape[1] > 1 and attention_mask.shape[0] > 1:
150
+ dtype = hidden_states.dtype
151
+ hidden_states = (hidden_states * attention_mask[:, :, None]).to(dtype)
152
+
153
+ return hidden_states
154
+
155
+ # Copied from https://github.com/huggingface/transformers/blob/main/src/transformers/models/jamba/modeling_jamba.py
156
+ class HybridMambaAttentionDynamicCache(DynamicCache):
157
+ """
158
+ A dynamic cache that can handle both the attention cache (which has a seq_len dimension) and the mamba cache
159
+ (which has a constant shape regardless of seq_len).
160
+
161
+ This cache has two sets of lists of tensors: `key_cache` and `value_cache` for attention cache and `conv_states`
162
+ and `ssm_states` for mamba cache. Each of these lists has `num_layers` tensors. The expected shapes are as follows:
163
+ For attention layers, `key_cache` and `value_cache` have a shape of `(batch_size, num_heads, seq_len, head_dim)`,
164
+ while `conv_states` and `ssm_states` have a shape of `(batch_size, 0)` (empty tensors).
165
+ For mamba layers, `key_cache` and `value_cache` have a shape of `(batch_size, 0)` (empty tensors),
166
+ while `conv_states` represents the convolution state and has a shape of `(batch_size, d_inner, d_conv)`,
167
+ and `ssm_states` represents the ssm state and has a shape of `(batch_size, d_inner, d_state)`.
168
+ """
169
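+
+ # Illustrative usage (a sketch, assuming a loaded model and tokenized inputs):
+ # cache = HybridMambaAttentionDynamicCache(model.config, batch_size=input_ids.shape[0],
+ #                                          dtype=model.dtype, device=model.device)
+ # outputs = model(input_ids, past_key_values=cache, use_cache=True)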
+
170
+ def __init__(self, config, batch_size, dtype=torch.float16, device=None):
171
+ super().__init__()
172
+ self.dtype = dtype
173
+ self.hybrid_override_pattern = config.hybrid_override_pattern
174
+ self.has_previous_state = False # only used by mamba
175
+ intermediate_size = config.mamba_num_heads * config.mamba_head_dim
176
+ ssm_state_size = config.ssm_state_size
177
+ conv_kernel_size = config.conv_kernel
178
+ self.conv_states = []
179
+ self.ssm_states = []
180
+ self.transformer_layers = []
181
+ for i in range(config.num_hidden_layers):
182
+ if self.hybrid_override_pattern[i] == "M":
183
+ # Mamba layer
184
+ self.conv_states += [
185
+ torch.zeros(batch_size, intermediate_size, conv_kernel_size, device=device, dtype=dtype)
186
+ ]
187
+ self.ssm_states += [
188
+ torch.zeros(batch_size, intermediate_size, ssm_state_size, device=device, dtype=dtype)
189
+ ]
190
+ else:
191
+ # Attention or MLP layer
192
+ self.conv_states += [torch.tensor([[]] * batch_size, device=device)]
193
+ self.ssm_states += [torch.tensor([[]] * batch_size, device=device)]
194
+ self.transformer_layers.append(i)
195
+
196
+ self.key_cache = [torch.tensor([[]] * batch_size, device=device) for _ in range(config.num_hidden_layers)]
197
+ self.value_cache = [torch.tensor([[]] * batch_size, device=device) for _ in range(config.num_hidden_layers)]
198
+
199
+ def update(
200
+ self,
201
+ key_states: torch.Tensor,
202
+ value_states: torch.Tensor,
203
+ layer_idx: int,
204
+ cache_kwargs: Optional[Dict[str, Any]] = None,
205
+ ) -> Tuple[torch.Tensor, torch.Tensor]:
206
+ # Update the cache
207
+ if self.key_cache[layer_idx].shape[-1] == 0:
208
+ self.key_cache[layer_idx] = key_states
209
+ self.value_cache[layer_idx] = value_states
210
+ else:
211
+ self.key_cache[layer_idx] = torch.cat([self.key_cache[layer_idx], key_states], dim=2)
212
+ self.value_cache[layer_idx] = torch.cat([self.value_cache[layer_idx], value_states], dim=2)
213
+
214
+ return self.key_cache[layer_idx], self.value_cache[layer_idx]
215
+
216
+ def reorder_cache(self, beam_idx: torch.LongTensor):
217
+ """Reorders the cache for beam search, given the selected beam indices."""
218
+ for layer_idx in range(len(self.key_cache)):
219
+ device = self.key_cache[layer_idx].device
220
+ self.key_cache[layer_idx] = self.key_cache[layer_idx].index_select(0, beam_idx.to(device))
221
+ device = self.value_cache[layer_idx].device
222
+ self.value_cache[layer_idx] = self.value_cache[layer_idx].index_select(0, beam_idx.to(device))
223
+
224
+ device = self.conv_states[layer_idx].device
225
+ self.conv_states[layer_idx] = self.conv_states[layer_idx].index_select(0, beam_idx.to(device))
226
+ device = self.ssm_states[layer_idx].device
227
+ self.ssm_states[layer_idx] = self.ssm_states[layer_idx].index_select(0, beam_idx.to(device))
228
+
229
+ def get_seq_length(self, layer_idx: Optional[int] = 0) -> int:
230
+ """Returns the sequence length of the cached states. A layer index can be optionally passed."""
231
+ # take any layer that holds an attention cache (i.e. not an empty placeholder tensor)
232
+ layer_idx = self.transformer_layers[0] if layer_idx not in self.transformer_layers else layer_idx
233
+ if len(self.key_cache) <= layer_idx:
234
+ return 0
235
+ return self.key_cache[layer_idx].shape[-2]
236
+
237
+ def to_legacy_cache(self) -> Tuple[Tuple[torch.Tensor], Tuple[torch.Tensor]]:
238
+ raise NotImplementedError("HybridMambaAttentionDynamicCache does not have a legacy cache equivalent.")
239
+
240
+ @classmethod
241
+ def from_legacy_cache(cls, past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None) -> "DynamicCache":
242
+ raise NotImplementedError("HybridMambaAttentionDynamicCache does not have a legacy cache equivalent.")
243
+
244
+ # Copied from modeling_mamba2.py
245
+ def update_conv_state(
246
+ self, layer_idx: int, new_conv_state: torch.Tensor, cache_init: bool = False
247
+ ) -> torch.Tensor:
248
+ if cache_init:
249
+ self.conv_states[layer_idx] = new_conv_state.to(self.conv_states[layer_idx].device)
250
+ else:
251
+ self.conv_states[layer_idx] = self.conv_states[layer_idx].roll(shifts=-1, dims=-1)
252
+ self.conv_states[layer_idx][:, :, -1] = new_conv_state[:, 0, :].to(self.conv_states[layer_idx].device)
253
+ return self.conv_states[layer_idx]
254
+
255
+ def update_ssm_state(self, layer_idx: int, new_ssm_state: torch.Tensor):
256
+ self.ssm_states[layer_idx] = new_ssm_state.to(self.ssm_states[layer_idx].device)
257
+ return self.ssm_states[layer_idx]
258
+
259
+ def reset(self):
260
+ # conv_states and ssm_states are Python lists of tensors, so zero each entry in place
+ for layer_idx in range(len(self.conv_states)):
+     self.conv_states[layer_idx].zero_()
+     self.ssm_states[layer_idx].zero_()
262
+
263
+ class MambaRMSNormGated(torch.nn.Module):
264
+ def __init__(self, hidden_size, group_size, eps=1e-5):
265
+ super().__init__()
266
+ self.weight = nn.Parameter(torch.ones(hidden_size))
267
+ self.variance_epsilon = eps
268
+ self.group_size = group_size
269
+
270
+ # jan28b version
271
+ def forward(self, hidden_states, gate=None):
272
+ return rmsnorm_fn(x=hidden_states,
273
+ weight=self.weight,
274
+ bias=None, # No bias
275
+ z=gate,
276
+ eps=self.variance_epsilon,
277
+ group_size=self.group_size,
278
+ norm_before_gate=False
279
+ )
280
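+
+ # Reference semantics (a sketch of the fused kernel with norm_before_gate=False):
+ # the gate is applied first, x = hidden_states * silu(gate), then x is
+ # RMS-normalized within each group of `group_size` channels and scaled by `weight`.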
+
281
+ class NemotronHMamba2Mixer(nn.Module):
282
+ """
283
+ Compute ∆, A, B, C, and D the state space parameters and compute the `contextualized_states`.
284
+ A, D are input independent (see Mamba paper [1] Section 3.5.2 "Interpretation of A" for why A isn't selective)
285
+ ∆, B, C are input-dependent (this is a key difference between Mamba and the linear time invariant S4,
286
+ and is why Mamba is called **selective** state spaces)
287
+ """
288
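+
+ # Discretization used below (illustrative): with a per-head step dt,
+ # dA = exp(dt * A) and dB = dt * B, so the recurrence is
+ # h_t = dA * h_{t-1} + dB * x_t with output y_t = C h_t + D x_t.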
+
289
+ def __init__(self, config: NemotronHConfig, layer_idx: int):
290
+ super().__init__()
291
+ self.num_heads = config.mamba_num_heads
292
+ self.hidden_size = config.hidden_size
293
+ self.ssm_state_size = config.ssm_state_size
294
+ self.conv_kernel_size = config.conv_kernel
295
+ self.intermediate_size = config.mamba_num_heads * config.mamba_head_dim
296
+ self.layer_idx = layer_idx
297
+ self.use_conv_bias = config.use_conv_bias
298
+ self.activation = config.mamba_hidden_act
299
+ self.act = ACT2FN[config.mamba_hidden_act]
300
+
301
+ self.layer_norm_epsilon = config.layer_norm_epsilon
302
+
303
+ self.n_groups = config.n_groups
304
+ self.head_dim = config.mamba_head_dim
305
+ self.chunk_size = config.chunk_size
306
+
307
+ self.time_step_limit = config.time_step_limit
308
+ self.time_step_min = config.time_step_min
309
+ self.time_step_max = config.time_step_max
310
+
311
+ self.conv_dim = self.intermediate_size + 2 * self.n_groups * self.ssm_state_size
312
+ self.conv1d = nn.Conv1d(
313
+ in_channels=self.conv_dim,
314
+ out_channels=self.conv_dim,
315
+ bias=config.use_conv_bias,
316
+ kernel_size=config.conv_kernel,
317
+ groups=self.conv_dim,
318
+ padding=config.conv_kernel - 1,
319
+ )
320
+
321
+ # projection of the input hidden states
322
+ projection_size = self.intermediate_size + self.conv_dim + self.num_heads
323
+ self.in_proj = nn.Linear(
324
+ self.hidden_size,
325
+ projection_size,
326
+ bias=config.use_bias,
327
+ )
328
+ # selective projection used to make dt, B and C input dependent
329
+
330
+ # time step projection (discretization)
331
+ # instantiate once and copy inv_dt in init_weights of PretrainedModel
332
+ self.dt_bias = nn.Parameter(torch.ones(self.num_heads))
333
+
334
+ # S4D real initialization. These are not discretized!
335
+ # The core is to load them, compute the discrete states, then write the updated state. Keeps the memory bounded
336
+ A = torch.arange(1, self.num_heads + 1)
337
+ self.A_log = nn.Parameter(torch.log(A))
338
+ self.A_log._no_weight_decay = True
339
+ self.norm = MambaRMSNormGated(self.intermediate_size, eps=self.layer_norm_epsilon, group_size=self.intermediate_size // self.n_groups)
340
+ self.D = nn.Parameter(torch.ones(self.num_heads))
341
+ self.D._no_weight_decay = True
342
+
343
+ self.out_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=config.use_bias)
344
+ self.use_bias = config.use_bias
345
+
346
+ if not is_fast_path_available:
347
+ logger.warning_once(
348
+ "The fast path is not available because on of `(selective_state_update, causal_conv1d_fn, causal_conv1d_update)`"
349
+ " is None. Falling back to the naive implementation. To install follow https://github.com/state-spaces/mamba/#installation and"
350
+ " https://github.com/Dao-AILab/causal-conv1d"
351
+ )
352
+
353
+ def cuda_kernels_forward(
354
+ self,
355
+ hidden_states: torch.Tensor,
356
+ cache_params: Optional[HybridMambaAttentionDynamicCache] = None,
357
+ cache_position: Optional[torch.LongTensor] = None,
358
+ attention_mask: Optional[torch.Tensor] = None,
359
+ ):
360
+ # 1. Gated MLP's linear projection
361
+ hidden_states = apply_mask_to_padding_states(hidden_states, attention_mask)
362
+ projected_states = self.in_proj(hidden_states)
363
+
364
+ # Set up dimensions for reshapes later
365
+ batch_size, seq_len, _ = hidden_states.shape
366
+ groups_time_state_size = self.n_groups * self.ssm_state_size
367
+ d_mlp = (
368
+ projected_states.shape[-1]
369
+ - 2 * self.intermediate_size
370
+ - 2 * self.n_groups * self.ssm_state_size
371
+ - self.num_heads
372
+ ) // 2
373
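+ # The in_proj output packs [z_mlp, x_mlp, gate, (hidden_states, B, C), dt] along
+ # the last dim (labels illustrative); since projection_size = intermediate_size +
+ # conv_dim + num_heads (see __init__), d_mlp works out to 0 and the first two
+ # splits below are empty.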
+
374
+ # Single step calculations via cache
375
+ if cache_params is not None and cache_position is not None and cache_position[0] > 0:
376
+ _, _, gate, hidden_states_B_C, dt = projected_states.squeeze(1).split(
377
+ [d_mlp, d_mlp, self.intermediate_size, self.conv_dim, self.num_heads], dim=-1
378
+ )
379
+
380
+ # 2. Convolution sequence transformation
381
+ hidden_states_B_C = causal_conv1d_update(
382
+ hidden_states_B_C,
383
+ cache_params.conv_states[self.layer_idx],
384
+ self.conv1d.weight.squeeze(1),
385
+ self.conv1d.bias,
386
+ self.activation,
387
+ )
388
+
389
+ hidden_states, B, C = torch.split(
390
+ hidden_states_B_C,
391
+ [self.intermediate_size, groups_time_state_size, groups_time_state_size],
392
+ dim=-1,
393
+ )
394
+
395
+ # 3. SSM transformation
396
+ A = -torch.exp(self.A_log.float()) # (nheads,)
397
+ A = A[:, None, ...][:, :, None].expand(-1, self.head_dim, self.ssm_state_size).to(dtype=torch.float32)
398
+ dt = dt[:, :, None].expand(-1, -1, self.head_dim)
399
+ dt_bias = self.dt_bias[:, None, ...].expand(-1, self.head_dim)
400
+ D = self.D[:, None, ...].expand(-1, self.head_dim)
401
+ B = B.view(batch_size, self.n_groups, B.shape[1] // self.n_groups)
402
+ C = C.view(batch_size, self.n_groups, C.shape[1] // self.n_groups)
403
+ hidden_states_reshaped = hidden_states.view(batch_size, self.num_heads, self.head_dim)
404
+ hidden_states = selective_state_update(
405
+ cache_params.ssm_states[self.layer_idx],
406
+ hidden_states_reshaped,
407
+ dt,
408
+ A,
409
+ B,
410
+ C,
411
+ D,
412
+ z=None,
413
+ dt_bias=dt_bias,
414
+ dt_softplus=True,
415
+ )
416
+ hidden_states = hidden_states.view(batch_size, self.num_heads * self.head_dim)
417
+ hidden_states = self.norm(hidden_states, gate)
418
+
419
+ # 4. Final linear projection
420
+ out = self.out_proj(hidden_states)[:, None, ...]
421
+
422
+ # Fused calculations or step by step if no initialized cache is found
423
+ else:
424
+ A = -torch.exp(self.A_log.float()) # (num_heads) or (intermediate_size, state_size)
425
+ dt_limit_kwargs = {} if self.time_step_limit == (0.0, float("inf")) else {"dt_limit": self.time_step_limit}
426
+
427
+ # 2-4. Fused kernel for conv1d, SSM, and the final projection
428
+ if self.training and cache_params is None:
429
+ out = mamba_split_conv1d_scan_combined(
430
+ projected_states,
431
+ self.conv1d.weight.squeeze(1),
432
+ self.conv1d.bias,
433
+ self.dt_bias,
434
+ A,
435
+ D=self.D,
436
+ chunk_size=self.chunk_size,
437
+ seq_idx=None, # was seq_idx
438
+ activation=self.activation,
439
+ rmsnorm_weight=self.norm.weight,
440
+ rmsnorm_eps=self.norm.variance_epsilon,
441
+ outproj_weight=self.out_proj.weight,
442
+ outproj_bias=self.out_proj.bias,
443
+ headdim=self.head_dim,
444
+ ngroups=self.n_groups,
445
+ norm_before_gate=False,
446
+ return_final_states=False,
447
+ **dt_limit_kwargs,
448
+ )
449
+
450
+ else:
451
+ _, _, gate, hidden_states_B_C, dt = projected_states.split(
452
+ [d_mlp, d_mlp, self.intermediate_size, self.conv_dim, self.num_heads], dim=-1
453
+ )
454
+
455
+ # 2. Convolution sequence transformation
456
+ # Init cache
457
+ if cache_params is not None:
458
+ hidden_states_B_C_transposed = hidden_states_B_C.transpose(1, 2)
459
+ conv_states = nn.functional.pad(
460
+ hidden_states_B_C_transposed,
461
+ (cache_params.conv_kernel_size - hidden_states_B_C_transposed.shape[-1], 0),
462
+ )
463
+ cache_params.update_conv_state(
464
+ layer_idx=self.layer_idx, new_conv_state=conv_states, cache_init=True
465
+ )
466
+
467
+ if self.activation not in ["silu", "swish"]:
468
+ hidden_states_B_C = self.act(
469
+ self.conv1d(hidden_states_B_C.transpose(1, 2))[..., :seq_len].transpose(1, 2)
470
+ )
471
+ else:
472
+ hidden_states_B_C = causal_conv1d_fn(
473
+ x=hidden_states_B_C.transpose(1, 2),
474
+ weight=self.conv1d.weight.squeeze(1),
475
+ bias=self.conv1d.bias,
476
+ activation=self.activation,
477
+ ).transpose(1, 2)
478
+ hidden_states_B_C = apply_mask_to_padding_states(hidden_states_B_C, attention_mask)
479
+ hidden_states, B, C = torch.split(
480
+ hidden_states_B_C,
481
+ [self.intermediate_size, groups_time_state_size, groups_time_state_size],
482
+ dim=-1,
483
+ )
484
+
485
+ # 3. SSM transformation
486
+ scan_output, ssm_state = mamba_chunk_scan_combined(
487
+ hidden_states.view(batch_size, seq_len, -1, self.head_dim),
488
+ dt,
489
+ A,
490
+ B.view(batch_size, seq_len, self.n_groups, -1),
491
+ C.view(batch_size, seq_len, self.n_groups, -1),
492
+ chunk_size=self.chunk_size,
493
+ D=self.D,
494
+ z=None,
495
+ seq_idx=None,
496
+ return_final_states=True,
497
+ dt_bias=self.dt_bias,
498
+ dt_softplus=True,
499
+ **dt_limit_kwargs,
500
+ )
501
+
502
+ # Init cache
503
+ if ssm_state is not None and cache_params is not None:
504
+ cache_params.update_ssm_state(layer_idx=self.layer_idx, new_ssm_state=ssm_state)
505
+
506
+ scan_output = scan_output.view(batch_size, seq_len, -1)
507
+
508
+ # Multiply "gate" branch and apply extra normalization layer
509
+ scan_output = self.norm(scan_output, gate)
510
+
511
+ # 4. Final linear projection
512
+ out = self.out_proj(scan_output)
513
+ return out
514
+
515
+ # fmt: off
516
+ def torch_forward(self, input_states, cache_params: Optional[HybridMambaAttentionDynamicCache]=None, cache_position:Optional[torch.LongTensor]=None, attention_mask: Optional[torch.Tensor]=None):
517
+ batch_size, seq_len, _ = input_states.shape
518
+ dtype = input_states.dtype
519
+
520
+ # 1. Gated MLP's linear projection
521
+ input_states = apply_mask_to_padding_states(input_states, attention_mask)
522
+ projected_states = self.in_proj(input_states)
523
+ d_mlp = (projected_states.shape[-1] - 2 * self.intermediate_size - 2 * self.n_groups * self.ssm_state_size-self.num_heads) // 2
524
+ _, _, gate, hidden_states_B_C, dt = projected_states.split(
525
+ [d_mlp, d_mlp, self.intermediate_size, self.conv_dim, self.num_heads], dim=-1
526
+ )
527
+
528
+ # 2. Convolution sequence transformation
529
+ if cache_params is not None and cache_position is not None and cache_position[0] > 0:
530
+ cache_params.update_conv_state(layer_idx=self.layer_idx, new_conv_state=hidden_states_B_C, cache_init=False)
531
+
532
+ # We need to guarantee that anything regarding the cache is on the same device
533
+ conv_states = cache_params.conv_states[self.layer_idx].to(device=self.conv1d.weight.device)
534
+
535
+ hidden_states_B_C = torch.sum(
536
+ conv_states * self.conv1d.weight.squeeze(1), dim=-1
537
+ )
538
+ if self.use_conv_bias:
539
+ hidden_states_B_C = hidden_states_B_C + self.conv1d.bias
540
+ hidden_states_B_C = self.act(hidden_states_B_C)
541
+ else:
542
+ # Init cache
543
+ if cache_params is not None:
544
+ hidden_states_B_C_transposed = hidden_states_B_C.transpose(1, 2)
545
+ conv_states = nn.functional.pad(
546
+ hidden_states_B_C_transposed, (cache_params.conv_kernel_size - hidden_states_B_C_transposed.shape[-1], 0)
547
+ )
548
+ cache_params.update_conv_state(layer_idx=self.layer_idx, new_conv_state=conv_states, cache_init=True)
549
+
550
+ hidden_states_B_C = self.act(self.conv1d(hidden_states_B_C.transpose(1, 2))[..., :seq_len].transpose(1, 2))
551
+
552
+ hidden_states_B_C = apply_mask_to_padding_states(hidden_states_B_C, attention_mask)
553
+ hidden_states, B, C = torch.split(
554
+ hidden_states_B_C,
555
+ [self.intermediate_size, self.n_groups * self.ssm_state_size, self.n_groups * self.ssm_state_size],
556
+ dim=-1
557
+ )
558
+
559
+ # 3. SSM transformation
560
+ A = -torch.exp(self.A_log.float()) # [num_heads]
561
+ if cache_params is not None and cache_position is not None and cache_position[0] > 0:
562
+ # We need to guarantee that anything regarding the cache is on the same device
563
+ cache_device = cache_params.ssm_states[self.layer_idx].device
564
+
565
+ # Note: there is no need to pad parameter matrices here, as there is just one new token
566
+ # for batched generation
567
+ dt = dt[:, 0, :][:, None, ...]
568
+ dt = dt.transpose(1, 2).expand(batch_size, dt.shape[-1], self.head_dim)
569
+ # [num_heads] -> [num_heads, head_dim]
570
+ dt_bias = self.dt_bias[..., None].expand(self.dt_bias.shape[0], self.head_dim)
571
+
572
+ dt = torch.nn.functional.softplus(dt + dt_bias.to(dt.dtype))
573
+ dt = torch.clamp(dt, self.time_step_limit[0], self.time_step_limit[1])
574
+ A = A[..., None, None].expand(self.num_heads, self.head_dim, self.ssm_state_size).to(dtype=torch.float32)
575
+ # [bsz, num_heads, head_dim, state_size]
576
+ dA = (torch.exp(dt[..., None] * A)).to(device=cache_device)
577
+
578
+ # Discretize B
579
+ # [bsz, n_groups * state_size] -> [bsz, n_groups, 1, state_size] ->
580
+ # -> [bsz, n_groups, group to head repetition factor, state_size] -> [bsz, num_heads, state_size]
581
+ B = B.reshape(batch_size, self.n_groups, -1)[..., None, :]
582
+ B = B.expand(batch_size, self.n_groups, self.num_heads // self.n_groups, B.shape[-1]).contiguous()
583
+ B = B.reshape(batch_size, -1, B.shape[-1])
584
+ # [bsz, num_heads, head_dim, state_size]
585
+ dB = dt[..., None] * B[..., None, :]
586
+
587
+ # Discretize x into dB
588
+ # [bsz, intermediate_size] -> [bsz, num_heads, head_dim]
589
+ hidden_states = hidden_states.reshape(batch_size, -1, self.head_dim)
590
+ dBx = (dB * hidden_states[..., None]).to(device=cache_device)
591
+
592
+ # State calculation
593
+ cache_params.update_ssm_state(
594
+ layer_idx=self.layer_idx,
595
+ new_ssm_state=cache_params.ssm_states[self.layer_idx] * dA + dBx
596
+ )
597
+
598
+ # Subsequent output
599
+ # [bsz, n_groups * state_size] -> [bsz, num_heads, state_size]
600
+ C = C.reshape(batch_size, self.n_groups, -1)[..., None, :]
601
+ C = C.expand(batch_size, self.n_groups, self.num_heads // self.n_groups, C.shape[-1]).contiguous()
602
+ C = C.reshape(batch_size, -1, C.shape[-1])
603
+ # [bsz, num_heads, head_dim]
604
+
605
+ ssm_states = cache_params.ssm_states[self.layer_idx].to(device=C.device, dtype=C.dtype) # Shape: [b, h, d, n]
606
+ # Reshape ssm_states to merge the first two dimensions
607
+ ssm_states_reshaped = ssm_states.view(batch_size * self.num_heads, self.head_dim, self.ssm_state_size) # Shape: [b*h, d, n]
608
+ C_reshaped = C.view(batch_size * self.num_heads, self.ssm_state_size, 1) # Shape: [b*h, n, 1]
609
+ y = torch.bmm(ssm_states_reshaped, C_reshaped)
610
+ y = y.view(batch_size, self.num_heads, self.head_dim)
611
+
612
+ # D skip connection
613
+ # [num_heads] -> [num_heads, head_dim]
614
+ D = self.D[..., None].expand(self.D.shape[0], self.head_dim)
615
+ y = (y + hidden_states * D).to(y.dtype)
616
+
617
+ # [bsz, num_heads, head_dim] -> [bsz, 1, intermediate_size]
618
+ y = y.reshape(batch_size, -1)[:, None, ...]
619
+ else:
620
+ # begin ssd naive implementation without einsums
621
+ dt = nn.functional.softplus(dt + self.dt_bias)
622
+ dt = torch.clamp(dt, self.time_step_limit[0], self.time_step_limit[1])
623
+ hidden_states = hidden_states.reshape(batch_size, seq_len, -1, self.head_dim).float()
624
+ B = B.reshape(batch_size, seq_len, -1, self.ssm_state_size).float()
625
+ C = C.reshape(batch_size, seq_len, -1, self.ssm_state_size).float()
626
+ B = B.repeat_interleave(self.num_heads // self.n_groups, dim=2, output_size=self.num_heads)
627
+ C = C.repeat_interleave(self.num_heads // self.n_groups, dim=2, output_size=self.num_heads)
628
+ pad_size = (self.chunk_size - seq_len % self.chunk_size) % self.chunk_size
629
+
630
+ D_residual = self.D[..., None] * pad_tensor_by_size(hidden_states, pad_size)
631
+
632
+ # Discretize x and A
633
+ hidden_states = hidden_states * dt[..., None]
634
+ A = A.to(hidden_states.dtype) * dt
635
+
636
+ # Rearrange into blocks/chunks
637
+ hidden_states, A, B, C = [reshape_into_chunks(t, pad_size, self.chunk_size) for t in (hidden_states, A, B, C)]
638
+
639
+ # [bsz, -1, chunk_size, num_heads] -> [bsz, num_heads, -1, chunk_size]
640
+ A = A.permute(0, 3, 1, 2)
641
+ A_cumsum = torch.cumsum(A, dim=-1)
642
+
643
+ # 1. Compute the output for each intra-chunk (diagonal blocks)
644
+ # This is the analog of a causal mask
645
+ L = torch.exp(segment_sum(A))
646
+
647
+ # Contraction of C and B to get G (attention-weights like)
648
+ G_intermediate = C[:, :, :, None, :, :] * B[:, :, None, :, :, :] # shape: (b, c, l, s, h, n)
649
+ G = G_intermediate.sum(dim=-1) # shape: (b, c, l, s, h)
650
+
651
+ # Compute M, equivalent to applying attention mask to weights
652
+ M_intermediate = G[..., None] * L.permute(0, 2, 3, 4, 1)[..., None]
653
+ M = M_intermediate.sum(dim=-1)
654
+
655
+ # Compute Y_diag (apply to values)
656
+ Y_diag = (M[..., None] * hidden_states[:, :, None]).sum(dim=3)
657
+
658
+ # 2. Compute the state for each intra-chunk
659
+ # (right term of low-rank factorization of off-diagonal blocks; B terms)
660
+ decay_states = torch.exp((A_cumsum[:, :, :, -1:] - A_cumsum))
661
+ B_decay = B * decay_states.permute(0, -2, -1, 1)[..., None]
662
+ states = (B_decay[..., None, :] * hidden_states[..., None]).sum(dim=2)
663
+
664
+ # 3. Compute the inter-chunk SSM recurrence; produces correct SSM states at chunk boundaries
665
+ # (middle term of factorization of off-diag blocks; A terms)
666
+ if cache_params is not None and cache_position is not None and cache_position[0] > 0:
667
+ previous_states = cache_params.ssm_states[self.layer_idx][:, None, ...].to(device=states.device)
668
+ else:
669
+ previous_states = torch.zeros_like(states[:, :1])
670
+ states = torch.cat([previous_states, states], dim=1)
671
+ decay_chunk = torch.exp(segment_sum(nn.functional.pad(A_cumsum[:, :, :, -1], (1, 0))))
672
+ decay_chunk = decay_chunk.transpose(1, 3)
673
+ new_states = (decay_chunk[..., None, None] * states[:, :, None, ...]).sum(dim=1)
674
+ states, ssm_state = new_states[:, :-1], new_states[:, -1]
675
+
676
+ # 4. Compute state -> output conversion per chunk
677
+ # (left term of low-rank factorization of off-diagonal blocks; C terms)
678
+ state_decay_out = torch.exp(A_cumsum)
679
+ C_times_states = (C[..., None, :] * states[:, :, None, ...])
680
+ state_decay_out_permuted = state_decay_out.permute(0, 2, 3, 1)
681
+ Y_off = (C_times_states.sum(-1) * state_decay_out_permuted[..., None])
682
+
683
+ # Add output of intra-chunk and inter-chunk terms (diagonal and off-diagonal blocks)
684
+ y = Y_diag + Y_off
685
+ # [bsz, -1, self.chunk_size, num_heads, head_dim] -> [bsz, (padded) seq_len, num_heads, head_dim]
686
+ y = y.reshape(batch_size, -1, self.num_heads, self.head_dim)
687
+
688
+ y = y + D_residual
689
+ # Cutting off padded chunks
690
+ if pad_size > 0:
691
+ y = y[:, :seq_len, :, :]
692
+ y = y.reshape(batch_size, seq_len, -1)
693
+
694
+ # Init cache
695
+ if ssm_state is not None and cache_params is not None:
696
+ cache_params.update_ssm_state(layer_idx=self.layer_idx, new_ssm_state=ssm_state)
697
+
698
+ scan_output = self.norm(y, gate)
699
+
700
+ # end ssd naive
701
+
702
+ # 4. Final linear projection
703
+ contextualized_states = self.out_proj(scan_output.to(dtype)) # [batch, seq_len, hidden_size]
704
+ return contextualized_states
705
+ # fmt: on
706
+
707
+ def forward(
708
+ self,
709
+ hidden_states,
710
+ cache_params: Optional[HybridMambaAttentionDynamicCache] = None,
711
+ cache_position: Optional[torch.LongTensor] = None,
712
+ attention_mask: Optional[torch.Tensor] = None,
713
+ ):
714
+ if is_fast_path_available and "cuda" in self.in_proj.weight.device.type:
715
+ return self.cuda_kernels_forward(hidden_states, cache_params, cache_position, attention_mask)
716
+ dtype = hidden_states.dtype
717
+ if attention_mask is not None and attention_mask.shape[1] > 1 and attention_mask.shape[0] > 1:
718
+ # tune out hidden states for pad tokens, see https://github.com/state-spaces/mamba/issues/66
719
+ hidden_states = (hidden_states * attention_mask[:, :, None]).to(dtype)
720
+
721
+ return self.torch_forward(hidden_states, cache_params, cache_position, attention_mask)
722
+
723
+
724
+ class NemotronHRMSNorm(nn.Module):
725
+ def __init__(self, hidden_size, eps=1e-6):
726
+ """
727
+ NemotronHRMSNorm is equivalent to T5LayerNorm and LlamaRMSNorm
728
+ """
729
+ super().__init__()
730
+ self.weight = nn.Parameter(torch.ones(hidden_size))
731
+ self.variance_epsilon = eps
732
+
733
+ def forward(self, hidden_states):
734
+ input_dtype = hidden_states.dtype
735
+ hidden_states = hidden_states.to(torch.float32)
736
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
737
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
738
+ # Weights are in float32
739
+ return (self.weight.to(torch.float32) * hidden_states).to(input_dtype)
740
+
741
+ class NemotronHBlock(nn.Module):
742
+ def __init__(self, config, layer_idx):
743
+ super().__init__()
744
+ self.config = config
745
+ self.layer_idx = layer_idx
746
+ self.residual_in_fp32 = config.residual_in_fp32
747
+ self.norm = NemotronHRMSNorm(config.hidden_size, eps=config.layer_norm_epsilon)
748
+
749
+ # M: Mamba2, *: Attention, -: MLP
750
+ self.block_type = config.layers_block_type[layer_idx]
751
+ if self.block_type == "mamba":
752
+ self.mixer = NemotronHMamba2Mixer(config, layer_idx=layer_idx)
753
+ elif self.block_type == "attention":
754
+ self.mixer = NEMOTRONH_ATTENTION_CLASSES[config._attn_implementation](config, layer_idx=layer_idx)
755
+ elif self.block_type == "mlp":
756
+ self.mixer = NemotronHMLP(config, layer_idx=layer_idx)
757
+ elif self.block_type == "moe":
758
+ self.mixer = NemotronHMOE(config, layer_idx=layer_idx)
759
+ else:
760
+ raise ValueError(f"Invalid layer pattern {config.hybrid_override_pattern[layer_idx]}")
761
+
762
+ def forward(
763
+ self,
764
+ hidden_states,
765
+ cache_params: Optional[HybridMambaAttentionDynamicCache] = None,
766
+ cache_position: Optional[torch.LongTensor] = None,
767
+ attention_mask: Optional[torch.Tensor] = None,
768
+ ):
769
+ with torch.cuda.stream(torch.cuda.default_stream(hidden_states.device)):
770
+ # * Use torch.cuda.stream() to avoid NaN issues when using multiple GPUs
771
+ residual = hidden_states
772
+ hidden_states = self.norm(hidden_states.to(dtype=self.norm.weight.dtype))
773
+ if self.residual_in_fp32:
774
+ residual = residual.to(torch.float32)
775
+
776
+ if self.block_type == "mamba":
777
+ hidden_states = self.mixer(
778
+ hidden_states, cache_params=cache_params, cache_position=cache_position
779
+ )
780
+ elif self.block_type == "attention":
781
+ hidden_states = self.mixer(
782
+ hidden_states, cache_position=cache_position
783
+ )
784
+ hidden_states = hidden_states[0]
785
+ elif self.block_type in ["mlp", "moe"]:
786
+ hidden_states = self.mixer(
787
+ hidden_states
788
+ )
789
+ else:
790
+ raise ValueError(f"Invalid block_type: {self.block_type}")
791
+
792
+ hidden_states = residual + hidden_states
793
+ return hidden_states
794
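+
+ # Pre-norm residual block (all mixer types): y = x + mixer(norm(x)), with the
+ # residual optionally kept in float32 when `residual_in_fp32` is set.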
+
795
+
796
+ # Copied from transformers.models.nemotron.modeling_nemotron Nemotron->NemotronH
797
+ class NemotronHMLP(nn.Module):
798
+ def __init__(self, config, intermediate_size=None, layer_idx: Optional[int] = None):
799
+ super().__init__()
800
+ self.config = config
801
+ self.layer_idx = layer_idx
802
+ if layer_idx is None:
803
+ logger.warning_once(
804
+ f"Instantiating {self.__class__.__name__} without passing a `layer_idx` is not recommended and will "
805
+ "lead to errors during the forward call if caching is used. Please make sure to provide a `layer_idx` "
806
+ "when creating this class."
807
+ )
808
+ self.hidden_size = config.hidden_size
809
+ self.intermediate_size = intermediate_size or config.intermediate_size
810
+ self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=config.mlp_bias)
811
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=config.mlp_bias)
812
+ self.act_fn = ACT2FN[config.mlp_hidden_act]
813
+
814
+ def forward(self, x):
815
+ return self.down_proj(self.act_fn(self.up_proj(x)))
816
+
817
+
818
+ class NemotronHMOE(nn.Module):
819
+ def __init__(self, config, layer_idx: Optional[int] = None):
820
+ super().__init__()
821
+ self.config = config
822
+ self.experts = nn.ModuleList(
823
+ [
824
+ NemotronHMLP(config, intermediate_size=config.moe_intermediate_size, layer_idx=layer_idx)
825
+ for _ in range(config.n_routed_experts)
826
+ ]
827
+ )
828
+ self.gate = NemotronHTopkRouter(config)
829
+ self.shared_experts = NemotronHMLP(
830
+ config=config, intermediate_size=config.moe_shared_expert_intermediate_size, layer_idx=layer_idx
831
+ )
832
+
833
+ def moe(self, hidden_states: torch.Tensor, topk_indices: torch.Tensor, topk_weights: torch.Tensor):
834
+ r"""
835
+ CALL FOR CONTRIBUTION! I don't have time to optimise this right now, but expert weights need to be fused
836
+ to not have to do a loop here (deepseek has 256 experts soooo yeah).
837
+ """
838
+ final_hidden_states = torch.zeros_like(hidden_states, dtype=topk_weights.dtype)
839
+ expert_mask = torch.nn.functional.one_hot(topk_indices, num_classes=len(self.experts))
840
+ expert_mask = expert_mask.permute(2, 0, 1)
841
+
842
+ for expert_idx in range(len(self.experts)):
843
+ expert = self.experts[expert_idx]
844
+ mask = expert_mask[expert_idx]
845
+ token_indices, weight_indices = torch.where(mask)
846
+
847
+ if token_indices.numel() > 0:
848
+ expert_weights = topk_weights[token_indices, weight_indices]
849
+ expert_input = hidden_states[token_indices]
850
+ expert_output = expert(expert_input)
851
+ weighted_output = expert_output * expert_weights.unsqueeze(-1)
852
+ final_hidden_states.index_add_(0, token_indices, weighted_output)
853
+ else:
854
+ # Local empty expert: no-op compute that still marks params as used.
855
+ expert_dtype = expert.down_proj.weight.dtype
856
+ dummy_out = expert(torch.zeros_like(hidden_states[0]).unsqueeze(0).to(expert_dtype))
857
+ final_hidden_states = final_hidden_states + dummy_out
858
+
859
+ # in original deepseek, the outputs of the experts are gathered once we leave this module
860
+ # thus the moe module is itself an IsolatedParallel module
861
+ # and all experts are "local", meaning we shard but we don't gather
862
+ return final_hidden_states.type(hidden_states.dtype)
863
+
864
+ def forward(self, hidden_states):
865
+ residuals = hidden_states
866
+ orig_shape = hidden_states.shape
867
+ topk_indices, topk_weights = self.gate(hidden_states)
868
+ hidden_states = hidden_states.view(-1, hidden_states.shape[-1])
869
+ hidden_states = self.moe(hidden_states, topk_indices, topk_weights).view(*orig_shape)
870
+ hidden_states = hidden_states + self.shared_experts(residuals)
871
+ return hidden_states
872
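+
+ # Per-token output (illustrative, given the config defaults): shared_experts(x)
+ # plus a weighted sum over the top num_experts_per_tok of n_routed_experts
+ # routed experts selected by the gate, e.g. the top 2 of 8.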
+
873
+
874
+ class NemotronHTopkRouter(nn.Module):
875
+ def __init__(self, config):
876
+ super().__init__()
877
+ self.config = config
878
+ self.top_k = config.num_experts_per_tok
879
+ self.n_routed_experts = config.n_routed_experts
880
+ self.routed_scaling_factor = config.routed_scaling_factor
881
+ self.n_group = config.n_group
882
+ self.topk_group = config.topk_group
883
+ self.norm_topk_prob = config.norm_topk_prob
884
+
885
+ self.weight = nn.Parameter(torch.empty((self.n_routed_experts, config.hidden_size), dtype=torch.float32))
886
+ self.register_buffer("e_score_correction_bias", torch.zeros(self.n_routed_experts, dtype=torch.float32))
887
+
888
+ @torch.no_grad()
889
+ def get_topk_indices(self, scores):
890
+ scores_for_choice = scores.view(-1, self.n_routed_experts) + self.e_score_correction_bias.unsqueeze(0)
891
+ group_scores = (
892
+ scores_for_choice.view(-1, self.n_group, self.n_routed_experts // self.n_group)
893
+ .topk(2, dim=-1)[0]
894
+ .sum(dim=-1)
895
+ )
896
+ group_idx = torch.topk(group_scores, k=self.topk_group, dim=-1, sorted=False)[1]
897
+ group_mask = torch.zeros_like(group_scores)
898
+ group_mask.scatter_(1, group_idx, 1)
899
+ score_mask = (
900
+ group_mask.unsqueeze(-1)
901
+ .expand(-1, self.n_group, self.n_routed_experts // self.n_group)
902
+ .reshape(-1, self.n_routed_experts)
903
+ )
904
+ scores_for_choice = scores_for_choice.masked_fill(~score_mask.bool(), 0.0)
905
+ topk_indices = torch.topk(scores_for_choice, k=self.top_k, dim=-1, sorted=False)[1]
906
+ return topk_indices
907
+
908
+ def forward(self, hidden_states):
909
+ hidden_states = hidden_states.view(-1, self.config.hidden_size)
910
+ router_logits = F.linear(hidden_states.type(torch.float32), self.weight.type(torch.float32))
911
+ scores = router_logits.sigmoid()
912
+ topk_indices = self.get_topk_indices(scores)
913
+ topk_weights = scores.gather(1, topk_indices)
914
+ if self.norm_topk_prob:
915
+ denominator = topk_weights.sum(dim=-1, keepdim=True) + 1e-20
916
+ topk_weights /= denominator
917
+ topk_weights = topk_weights * self.routed_scaling_factor
918
+ return topk_indices, topk_weights
919
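+
+ # Routing walk-through (illustrative): sigmoid router scores are biased by
+ # e_score_correction_bias for expert *selection* only; each token first keeps the
+ # best `topk_group` of `n_group` expert groups, then picks the `top_k` experts
+ # within them, while the returned weights come from the unbiased scores.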
+
920
+ # Copied from transformers.models.llama.modeling_llama.repeat_kv
921
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
922
+ """
923
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
924
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
925
+ """
926
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
927
+ if n_rep == 1:
928
+ return hidden_states
929
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
930
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
931
+
932
+
933
+ class NemotronHAttention(nn.Module):
934
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
935
+
936
+ def __init__(self, config: NemotronHConfig, layer_idx: Optional[int] = None):
937
+ super().__init__()
938
+ self.config = config
939
+ self.layer_idx = layer_idx
940
+ if layer_idx is None:
941
+ logger.warning_once(
942
+ f"Instantiating {self.__class__.__name__} without passing a `layer_idx` is not recommended and will "
943
+ "lead to errors during the forward call if caching is used. Please make sure to provide a `layer_idx` "
944
+ "when creating this class."
945
+ )
946
+
947
+ self.attention_dropout = config.attention_dropout
948
+ self.hidden_size = config.hidden_size
949
+ self.num_heads = config.num_attention_heads
950
+ if hasattr(config, "head_dim") and config.head_dim is not None:
951
+ self.head_dim = config.head_dim
952
+ else:
953
+ self.head_dim = config.hidden_size // self.num_heads
954
+ self.num_key_value_heads = config.num_key_value_heads
955
+ self.num_key_value_groups = self.num_heads // self.num_key_value_heads
956
+ self.max_position_embeddings = config.max_position_embeddings
957
+ self.is_causal = True
958
+
959
+ self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=config.attention_bias)
960
+ self.k_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=config.attention_bias)
961
+ self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=config.attention_bias)
962
+ self.o_proj = nn.Linear(self.head_dim * self.num_heads, self.hidden_size, bias=config.attention_bias)
963
+
964
+ def forward(
965
+ self,
966
+ hidden_states: torch.Tensor,
967
+ # position_embeddings: Tuple[torch.Tensor, torch.Tensor], #TODO
968
+ attention_mask: Optional[torch.Tensor] = None,
969
+ position_ids: Optional[torch.LongTensor] = None,
970
+ past_key_value: Optional[HybridMambaAttentionDynamicCache] = None,
971
+ output_attentions: bool = False,
972
+ use_cache: bool = False,
973
+ cache_position: Optional[torch.LongTensor] = None,
974
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
975
+ bsz, q_len, _ = hidden_states.size()
976
+
977
+ query_states = self.q_proj(hidden_states)
978
+ key_states = self.k_proj(hidden_states)
979
+ value_states = self.v_proj(hidden_states)
980
+
981
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
982
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
983
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
984
+
985
+ if past_key_value is not None:
986
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx)
987
+
988
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
989
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
990
+
991
+ causal_mask = attention_mask
992
+ if attention_mask is not None: # no matter the length, we just slice it
993
+ causal_mask = attention_mask[:, :, :, : key_states.shape[-2]]
994
+
995
+ if query_states.device.type == "cuda" and attention_mask is not None:
996
+ query_states = query_states.contiguous()
997
+ key_states = key_states.contiguous()
998
+ value_states = value_states.contiguous()
999
+
1000
+ is_causal = True if causal_mask is None and q_len > 1 else False
1001
+
1002
+ attn_output = torch.nn.functional.scaled_dot_product_attention(
1003
+ query_states,
1004
+ key_states,
1005
+ value_states,
1006
+ attn_mask=causal_mask,
1007
+ dropout_p=self.attention_dropout if self.training else 0.0,
1008
+ is_causal=is_causal,
1009
+ )
1010
+ attn_output = attn_output.transpose(1, 2).contiguous()
1011
+ #attn_output = attn_output.view(bsz, q_len, self.hidden_size)
1012
+ attn_output = attn_output.view(bsz, q_len, self.num_heads * self.head_dim)
1013
+
1014
+ attn_output = self.o_proj(attn_output)
1015
+
1016
+ return attn_output, None, past_key_value
1017
+
1018
+
1019
+ # Adapted from transformers.models.mistral.modeling_mistral.MistralFlashAttention2 with Mistral->Jamba
1020
+ #class JambaFlashAttention2(JambaAttention):
1021
+ class NemotronHFlashAttention2(NemotronHAttention):
1022
+ """
1023
+ NemotronH flash attention module. This module inherits from `NemotronHAttention` as the weights of the module stay
1024
+ untouched. The only required change is on the forward pass, where it needs to correctly call the public API of
1025
+ flash attention and deal with padding tokens in case the input contains any of them.
1026
+ """
1027
+ def __init__(self, *args, **kwargs):
1028
+ super().__init__(*args, **kwargs)
1029
+
1030
+ # TODO: Should be removed once Flash Attention for ROCm is bumped to 2.1.
1031
+ # flash_attn<2.1 generates a top-left aligned causal mask, while what is needed here is bottom-right alignment, which was made the default for flash_attn>=2.1. This attribute is used to handle this difference. Reference: https://github.com/Dao-AILab/flash-attention/releases/tag/v2.1.0.
1032
+ # Beware that with flash_attn<2.1, using q_seqlen != k_seqlen (except for the case q_seqlen == 1) produces a wrong mask (top-left).
1033
+ self._flash_attn_uses_top_left_mask = not is_flash_attn_greater_or_equal_2_10()
1034
+
1035
+ def forward(
1036
+ self,
1037
+ hidden_states: torch.Tensor,
1038
+ attention_mask: Optional[torch.Tensor] = None,
1039
+ position_ids: Optional[torch.LongTensor] = None,
1040
+ past_key_value: Optional[HybridMambaAttentionDynamicCache] = None,
1041
+ output_attentions: bool = False,
1042
+ use_cache: bool = False,
1043
+ cache_position: Optional[torch.LongTensor] = None,
1044
+ **kwargs,
1045
+ ):
1046
+ bsz, q_len, _ = hidden_states.size()
1047
+
1048
+ query_states = self.q_proj(hidden_states)
1049
+ key_states = self.k_proj(hidden_states)
1050
+ value_states = self.v_proj(hidden_states)
1051
+
1052
+ # Flash attention requires the input to have the shape
1053
+ # batch_size x seq_length x head_dim x hidden_dim
1054
+ # therefore we just need to keep the original shape
1055
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim)
1056
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
1057
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
1058
+
1059
+ if past_key_value is not None:
1060
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx)
1061
+
1062
+ # repeat k/v heads if n_kv_heads < n_heads
1063
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
1064
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
1065
+ dropout_rate = 0.0 if not self.training else self.attention_dropout
1066
+
1067
+ # In PEFT, the layer norms are usually cast to float32 for training stability,
1068
+ # so the input hidden states may have been silently cast to float32. Hence, we
1069
+ # cast them back to the expected dtype to make sure everything works.
1070
+ input_dtype = query_states.dtype
1071
+ if input_dtype == torch.float32:
1072
+ if torch.is_autocast_enabled():
1073
+ target_dtype = torch.get_autocast_gpu_dtype()
1074
+ # Handle the case where the model is quantized
1075
+ elif hasattr(self.config, "_pre_quantization_dtype"):
1076
+ target_dtype = self.config._pre_quantization_dtype
1077
+ else:
1078
+ target_dtype = self.q_proj.weight.dtype
1079
+
1080
+ logger.warning_once(
1081
+ f"The input hidden states seems to be silently casted in float32, this might be related to"
1082
+ f" the fact you have upcasted embedding or layer norm layers in float32. We will cast back the input in"
1083
+ f" {target_dtype}."
1084
+ )
1085
+
1086
+ query_states = query_states.to(target_dtype)
1087
+ key_states = key_states.to(target_dtype)
1088
+ value_states = value_states.to(target_dtype)
1089
+
1090
+ # Reshape to the expected shape for Flash Attention
1091
+ key_states = key_states.transpose(1, 2)
1092
+ value_states = value_states.transpose(1, 2)
1093
+
1094
+ attn_output = _flash_attention_forward(
1095
+ query_states,
1096
+ key_states,
1097
+ value_states,
1098
+ attention_mask,
1099
+ q_len,
1100
+ dropout=dropout_rate,
1101
+ sliding_window=getattr(self.config, "sliding_window", None),
1102
+ is_causal=self.is_causal,
1103
+ use_top_left_mask=self._flash_attn_uses_top_left_mask,
1104
+ )
1105
+
1106
+ #attn_output = attn_output.reshape(bsz, q_len, self.hidden_size).contiguous()
1107
+ attn_output = attn_output.reshape(bsz, q_len, self.num_heads * self.head_dim).contiguous()
1108
+ attn_output = self.o_proj(attn_output)
1109
+
1110
+ if not output_attentions:
1111
+ attn_weights = None
1112
+
1113
+ return attn_output, attn_weights, past_key_value
1114
+
1115
+
1116
+ # Adapted from transformers.models.mistral.modeling_mistral.MistralSdpaAttention with Mistral->Jamba
1117
+ #class JambaSdpaAttention(JambaAttention):
1118
+ class NemotronHSdpaAttention(NemotronHAttention):
1119
+ """
1120
+ NemotronH attention module using torch.nn.functional.scaled_dot_product_attention. This module inherits from
1121
+ `NemotronHAttention` as the weights of the module stay untouched. The only changes are on the forward pass, to adapt to the
1122
+ SDPA API.
1123
+ """
1124
+
1125
+ # Adapted from NemotronHAttention.forward
1126
+ def forward(
1127
+ self,
1128
+ hidden_states: torch.Tensor,
1129
+ attention_mask: Optional[torch.Tensor] = None,
1130
+ position_ids: Optional[torch.LongTensor] = None,
1131
+ past_key_value: Optional[HybridMambaAttentionDynamicCache] = None,
1132
+ output_attentions: bool = False,
1133
+ use_cache: bool = False,
1134
+ cache_position: Optional[torch.LongTensor] = None,
1135
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
1136
+ if output_attentions:
1137
+ # TODO: Improve this warning with e.g. `model.config.attn_implementation = "manual"` once this is implemented.
1138
+ logger.warning_once(
1139
+ "NemotronHModel is using NemotronHSdpaAttention, but `torch.nn.functional.scaled_dot_product_attention` does not support `output_attentions=True`. Falling back to the manual attention implementation, "
1140
+ 'but specifying the manual implementation will be required from Transformers version v5.0.0 onwards. This warning can be removed using the argument `attn_implementation="eager"` when loading the model.'
1141
+ )
1142
+ return super().forward(
1143
+ hidden_states=hidden_states,
1144
+ attention_mask=attention_mask,
1145
+ position_ids=position_ids,
1146
+ past_key_value=past_key_value,
1147
+ output_attentions=output_attentions,
1148
+ use_cache=use_cache,
1149
+ )
1150
+
1151
+ bsz, q_len, _ = hidden_states.size()
1152
+
1153
+ query_states = self.q_proj(hidden_states)
1154
+ key_states = self.k_proj(hidden_states)
1155
+ value_states = self.v_proj(hidden_states)
1156
+
1157
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
1158
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
1159
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
1160
+
1161
+ if past_key_value is not None:
1162
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx)
1163
+
1164
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
1165
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
1166
+
1167
+ causal_mask = attention_mask
1168
+ if attention_mask is not None:
1169
+ causal_mask = causal_mask[:, :, :, : key_states.shape[-2]]
1170
+
1171
+ # SDPA with memory-efficient backend is currently (torch==2.1.2) bugged with non-contiguous inputs with custom attn_mask,
1172
+ # Reference: https://github.com/pytorch/pytorch/issues/112577.
1173
+ if query_states.device.type == "cuda" and attention_mask is not None:
1174
+ query_states = query_states.contiguous()
1175
+ key_states = key_states.contiguous()
1176
+ value_states = value_states.contiguous()
1177
+
1178
+ # We dispatch to SDPA's Flash Attention or Efficient kernels via this `is_causal` if statement instead of an inline conditional assignment
1179
+ # in SDPA to support both torch.compile's dynamic shapes and full graph options. An inline conditional prevents dynamic shapes from compiling.
1180
+ # The q_len > 1 is necessary to match with AttentionMaskConverter.to_causal_4d that does not create a causal mask in case q_len == 1.
1181
+ is_causal = True if self.is_causal and causal_mask is None and q_len > 1 else False
1182
+
1183
+ attn_output = torch.nn.functional.scaled_dot_product_attention(
1184
+ query_states,
1185
+ key_states,
1186
+ value_states,
1187
+ attn_mask=causal_mask,
1188
+ dropout_p=self.attention_dropout if self.training else 0.0,
1189
+ is_causal=is_causal,
1190
+ )
1191
+
1192
+ attn_output = attn_output.transpose(1, 2).contiguous()
1193
+ attn_output = attn_output.view(bsz, q_len, self.num_heads * self.head_dim)  # consistent with the eager/flash paths when head_dim is set explicitly
1194
+
1195
+ attn_output = self.o_proj(attn_output)
1196
+
1197
+ return attn_output, None, past_key_value
1198
+
1199
+
1200
+ NEMOTRONH_ATTENTION_CLASSES = {
1201
+ "eager": NemotronHAttention,
1202
+ "flash_attention_2": NemotronHFlashAttention2,
1203
+ "sdpa": NemotronHSdpaAttention,
1204
+ }
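+ # Maps `config._attn_implementation` (e.g. `attn_implementation="sdpa"`
+ # passed to `from_pretrained`) to the attention class used for each
+ # attention block.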
1205
+
1206
+ # Copied from transformers.models.mamba.modeling_mamba2.Mamba2PreTrainedModel
1207
+ class NemotronHPreTrainedModel(PreTrainedModel):
1208
+ """
1209
+ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
1210
+ models.
1211
+ """
1212
+
1213
+ config_class = NemotronHConfig
1214
+ base_model_prefix = "backbone"
1215
+ _no_split_modules = ["NemotronHBlock"]
1216
+ supports_gradient_checkpointing = True
1217
+ _is_stateful = True
1218
+
1219
+ def _init_weights(self, module):
1220
+ """Initialize the weights."""
1221
+ if isinstance(module, NemotronHMamba2Mixer):
1222
+ if getattr(module.dt_bias, "_is_hf_initialized", False):
1223
+ return
1224
+ module.A_log._no_weight_decay = True
1225
+ module.D._no_weight_decay = True
1226
+
1227
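+ # Sample dt log-uniformly in [time_step_min, time_step_max] (clamped at
+ # time_step_floor); dt_bias stores its inverse softplus so that
+ # softplus(dt_bias) reproduces dt at initialization.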
+ dt = torch.exp(
1228
+ torch.rand(self.config.mamba_num_heads)
1229
+ * (math.log(self.config.time_step_max) - math.log(self.config.time_step_min))
1230
+ + math.log(self.config.time_step_min)
1231
+ ).clamp(min=self.config.time_step_floor)
1232
+
1233
+ # Inverse of softplus: https://github.com/pytorch/pytorch/issues/72759
1234
+ inv_dt = dt + torch.log(-torch.expm1(-dt))
1235
+ with torch.no_grad():
1236
+ module.dt_bias.copy_(inv_dt)
1237
+ module.dt_bias._no_reinit = True
1238
+
1239
+ if isinstance(module, nn.Linear):
1240
+ if module.bias is not None:
1241
+ if not getattr(module.bias, "_no_reinit", False):
1242
+ nn.init.zeros_(module.bias)
1243
+ elif isinstance(module, nn.Embedding):
1244
+ nn.init.normal_(module.weight, std=self.config.initializer_range)
1245
+
1246
+ # TODO: Check
1247
+ if self.config.rescale_prenorm_residual:
1248
+ # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
1249
+ # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
1250
+ # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
1251
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/
1252
+ #
1253
+ # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
1254
+ for name, p in module.named_parameters():
1255
+ if getattr(p, "_is_hf_initialized", False):
1256
+ continue
1257
+ if name in ["out_proj.weight"]:
1258
+ # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
1259
+ # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
1260
+ # We need to reinit p since this code could be called multiple times
1261
+ # Having just p *= scale would repeatedly scale it down
1262
+ nn.init.kaiming_uniform_(p, a=math.sqrt(5))
1263
+ with torch.no_grad():
1264
+ p /= math.sqrt(self.config.num_hidden_layers)
1265
+
1266
+
1267
+ @dataclass
1268
+ # Copied from transformers.models.mamba.modeling_mamba2.Mamba2Output with MAMBA2->NemotronH,Mamba2->NemotronH
1269
+ class NemotronHOutput(ModelOutput):
1270
+ """
1271
+ Class for the NemotronH model outputs.
1272
+
1273
+ Args:
1274
+ last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
1275
+ Sequence of hidden-states at the output of the last layer of the model.
1276
+ cache_params (`HybridMambaAttentionDynamicCache`):
1277
+ The state of the model at the last time step. Can be used in a forward method with the next `input_ids` to
1278
+ avoid providing the old `input_ids`.
1279
+
1280
+ Includes both the state-space model state matrices after the selective scan, and the convolutional states.
1281
+ hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
1282
+ Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
1283
+ one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
1284
+
1285
+ Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
1286
+ """
1287
+
1288
+ last_hidden_state: Optional[torch.FloatTensor] = None
1289
+ cache_params: Optional[HybridMambaAttentionDynamicCache] = None
1290
+ hidden_states: Optional[Tuple[torch.FloatTensor]] = None
1291
+ attentions: Optional[Tuple[torch.FloatTensor]] = None
1292
+
1293
+
1294
+ @dataclass
1295
+ # Copied from transformers.models.mamba2.modeling_mamba2.MambaCausalLMOutput with Mamba2->NemotronH
1296
+ class NemotronHCausalLMOutput(ModelOutput):
1297
+ """
1298
+ Base class for causal language model (or autoregressive) outputs.
1299
+
1300
+ Args:
1301
+ loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
1302
+ Language modeling loss (for next-token prediction).
1303
+ logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
1304
+ Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
1305
+ cache_params (`HybridMambaAttentionDynamicCache`):
1306
+ The state of the model at the last time step. Can be used in a forward method with the next `input_ids` to
1307
+ avoid providing the old `input_ids`.
1308
+
1309
+ Includes both the state-space model state matrices after the selective scan, and the convolutional states.
1310
+ hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
1311
+ Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
1312
+ one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
1313
+
1314
+ Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
1315
+ """
1316
+
1317
+ loss: Optional[torch.FloatTensor] = None
1318
+ logits: Optional[torch.FloatTensor] = None
1319
+ cache_params: Optional[HybridMambaAttentionDynamicCache] = None
1320
+ hidden_states: Optional[Tuple[torch.FloatTensor]] = None
1321
+ attentions: Optional[Tuple[torch.FloatTensor]] = None
1322
+
1323
+
1324
+ NEMOTRONH_START_DOCSTRING = r"""
1325
+
1326
+ This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
1327
+ library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
1328
+ etc.)
1329
+
1330
+ This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
1331
+ Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
1332
+ and behavior.
1333
+
1334
+ Parameters:
1335
+ config ([`NemotronHConfig`]): Model configuration class with all the parameters of the model.
1336
+ Initializing with a config file does not load the weights associated with the model, only the
1337
+ configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
1338
+ """
1339
+
1340
+ NEMOTRONH_INPUTS_DOCSTRING = r"""
1341
+ Args:
1342
+ input_ids (`torch.LongTensor` of shape `(batch_size, input_ids_length)`, *optional*):
1343
+ Indices of input sequence tokens in the vocabulary.
1344
+
1345
+ If `cache_params.seqlen_offset>0`, only `input_ids` that do not have their past calculated should be passed as
1346
+ `input_ids`.
1347
+
1348
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
1349
+ [`PreTrainedTokenizer.__call__`] for details.
1350
+
1351
+ [What are input IDs?](../glossary#input-ids)
1352
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
1353
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
1354
+ is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
1355
+ model's internal embedding lookup matrix.
1356
+ position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1357
+ Indices of positions of each input sequence tokens in the position embeddings.
1358
+ cache_params (`HybridMambaAttentionDynamicCache`, *optional*):
1359
+ If passed along, the model uses the previous state in all the blocks (which will give the output for the
1360
+ `input_ids` provided as if the model received `state_input_ids + input_ids` as context).
1361
+ use_cache (`bool`, *optional*):
1362
+ If set to `True`, the `cache_params` is returned and can be used to quickly generate the next logits.
1363
+ output_attentions (`bool`, *optional*):
1364
+ Whether or not to return the attentions tensors of all attention layers.
1365
+ output_hidden_states (`bool`, *optional*):
1366
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
1367
+ more detail.
1368
+ return_dict (`bool`, *optional*):
1369
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
1370
+ cache_position (`torch.LongTensor` of shape `(sequence_length)`, *optional*):
1371
+ The position of the current input in the cache. This is used to ensure that the cache is correctly updated.
1372
+ If `cache_params` is passed, `cache_position` should also be passed.
1373
+ attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*):
1374
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
1375
+
1376
+ - 1 for tokens that are **not masked**,
1377
+ - 0 for tokens that are **masked**.
1378
+
1379
+ [What are attention masks?](../glossary#attention-mask)
1380
+ """
1381
+
1382
+
1383
+ @add_start_docstrings(
1384
+ "The bare NemotronH Model transformer outputting raw hidden-states without any specific head on top.",
1385
+ NEMOTRONH_START_DOCSTRING,
1386
+ )
1387
+ class NemotronHModel(NemotronHPreTrainedModel):
1388
+ def __init__(self, config):
1389
+ super().__init__(config)
1390
+
1391
+ self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size)
1392
+ self.layers = nn.ModuleList([NemotronHBlock(config, layer_idx=idx) for idx in range(config.num_hidden_layers)])
1393
+
1394
+ self.gradient_checkpointing = False
1395
+ self.norm_f = NemotronHRMSNorm(config.hidden_size, eps=config.layer_norm_epsilon)
1396
+ # Initialize weights and apply final processing
1397
+ self._register_load_state_dict_pre_hook(self.load_hook)
1398
+ self.post_init()
1399
+
1400
+ def load_hook(self, state_dict, prefix, *args):
1401
+ for k in state_dict:
1402
+ if "embedding." in k:
1403
+ state_dict[k.replace("embedding.", "embeddings.")] = state_dict.pop(k)
1404
+ break
1405
+
1406
+ def get_input_embeddings(self):
1407
+ return self.embeddings
1408
+
1409
+ def set_input_embeddings(self, new_embeddings):
1410
+ self.embeddings = new_embeddings
1411
+
1412
+ @add_start_docstrings_to_model_forward(NEMOTRONH_INPUTS_DOCSTRING)
1413
+ @add_code_sample_docstrings(
1414
+ checkpoint=_CHECKPOINT_FOR_DOC,
1415
+ output_type=NemotronHOutput,
1416
+ config_class=_CONFIG_FOR_DOC,
1417
+ )
1418
+ def forward(
1419
+ self,
1420
+ input_ids: Optional[torch.LongTensor] = None,
1421
+ inputs_embeds: Optional[torch.LongTensor] = None,
1422
+ position_ids: Optional[torch.LongTensor] = None,
1423
+ cache_params: Optional[HybridMambaAttentionDynamicCache] = None,
1424
+ use_cache: Optional[bool] = None,
1425
+ output_attentions: Optional[bool] = None,
1426
+ output_hidden_states: Optional[bool] = None,
1427
+ return_dict: Optional[bool] = None,
1428
+ cache_position: Optional[torch.LongTensor] = None,
1429
+ attention_mask: Optional[torch.Tensor] = None,
1430
+ **kwargs,
1431
+ ) -> Union[Tuple, NemotronHOutput]:
1432
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1433
+ output_hidden_states = (
1434
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1435
+ )
1436
+ # use_cache = use_cache if use_cache is not None else self.config.use_cache
1437
+ use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False)
1438
+
1439
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1440
+
1441
+ if (input_ids is None) ^ (inputs_embeds is not None): # ^ is python for xor
1442
+ raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
1443
+
1444
+ if inputs_embeds is None:
1445
+ inputs_embeds = self.embeddings(input_ids)
1446
+
1447
+ if self.gradient_checkpointing and self.training and use_cache:
1448
+ logger.warning_once(
1449
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`."
1450
+ )
1451
+ use_cache = False
1452
+
1453
+ # From zamba_modeling.py
1454
+ if use_cache and cache_params is None:
1455
+ logger.warning_once(
1456
+ "NemotronH requires an initialized `NemotronHHybridDynamicCache` to return a cache. None was "
1457
+ "provided, so no cache will be returned."
1458
+ )
1459
+
1460
+ hidden_states = inputs_embeds
1461
+
1462
+ if cache_position is None:
1463
+ cache_position = torch.arange(hidden_states.shape[1], device=hidden_states.device)
1464
+ if position_ids is None:
1465
+ position_ids = cache_position.unsqueeze(0)
1466
+
1467
+ causal_mask = self._update_causal_mask(attention_mask, inputs_embeds, cache_position)
1468
+ mamba_mask = self._update_mamba_mask(attention_mask, cache_position)
1469
+
1470
+ all_hidden_states = () if output_hidden_states else None
1471
+ all_self_attns = () if output_attentions else None
1472
+ # End of the section adapted from zamba_modeling.py
1473
+
1474
+ for layer_idx, mixer_block in enumerate(self.layers):
1475
+ # Depending on the layer type we opt for 2D base attention mask (Mamba) or 4D causal mask (Attention)
1476
+ if mixer_block.block_type == "mamba":
1477
+ layer_mask = mamba_mask
1478
+ elif mixer_block.block_type == "attention":
1479
+ layer_mask = causal_mask
1480
+ elif mixer_block.block_type in ["mlp", "moe"]:
1481
+ layer_mask = None
1482
+ else:
1483
+ raise ValueError(f"Invalid block_type: {mixer_block.block_type}")
1484
+
1485
+ if output_hidden_states:
1486
+ all_hidden_states += (hidden_states,)
1487
+
1488
+ if self.gradient_checkpointing and self.training:
1489
+ hidden_states = self._gradient_checkpointing_func(
1490
+ mixer_block.__call__, hidden_states, cache_params, cache_position, layer_mask
1491
+ )
1492
+ else:
1493
+ hidden_states = mixer_block(
1494
+ hidden_states,
1495
+ cache_params=cache_params,
1496
+ cache_position=cache_position,
1497
+ attention_mask=layer_mask,
1498
+ )
1499
+
1500
+ # TODO: Store attentions
1501
+ # if output_attentions:
1502
+ # if layer_outputs[1] is not None:
1503
+ # # append attentions only of attention layers. Mamba layers return `None` as the attention weights
1504
+ # all_self_attns += (layer_outputs[1],)
1505
+
1506
+ # TODO (Check): should it happen before the forward pass?
1507
+ # if output_hidden_states:
1508
+ # all_hidden_states = all_hidden_states + (hidden_states,)
1509
+
1510
+ hidden_states = self.norm_f(hidden_states)
1511
+
1512
+ if output_hidden_states:
1513
+ all_hidden_states = all_hidden_states + (hidden_states,)
1514
+
1515
+ if not return_dict:
1516
+ return tuple(v for v in [hidden_states, cache_params, all_hidden_states] if v is not None)
1517
+
1518
+ return NemotronHOutput(
1519
+ last_hidden_state=hidden_states,
1520
+ cache_params=cache_params if use_cache else None,
1521
+ hidden_states=all_hidden_states,
1522
+ attentions=all_self_attns,
1523
+ )
1524
+
1525
+ # Copied from transformers.models.jamba.modeling_jamba.JambaModel._update_causal_mask
1526
+ def _update_causal_mask(self, attention_mask, input_tensor, cache_position):
1527
+ if self.config._attn_implementation == "flash_attention_2":
1528
+ if attention_mask is not None and 0.0 in attention_mask:
1529
+ return attention_mask
1530
+ return None
1531
+
1532
+ dtype, device = input_tensor.dtype, input_tensor.device
1533
+ min_dtype = torch.finfo(dtype).min
1534
+ sequence_length = input_tensor.shape[1]
1535
+ target_length = cache_position[-1] + 1
1536
+
1537
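+ # Build an additive mask: positions that may be attended to hold 0, while
+ # masked positions hold the most negative finite value of the dtype.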
+ causal_mask = torch.full((sequence_length, target_length), fill_value=min_dtype, dtype=dtype, device=device)
1538
+ if sequence_length != 1:
1539
+ causal_mask = torch.triu(causal_mask, diagonal=1)
1540
+ causal_mask *= torch.arange(target_length, device=device) > cache_position.reshape(-1, 1)
1541
+ causal_mask = causal_mask[None, None, :, :].expand(input_tensor.shape[0], 1, -1, -1)
1542
+ if attention_mask is not None:
1543
+ causal_mask = causal_mask.clone() # copy to contiguous memory for in-place edit
1544
+ if attention_mask.dim() == 2:
1545
+ mask_length = attention_mask.shape[-1]
1546
+ padding_mask = causal_mask[..., :mask_length].eq(0.0) * attention_mask[:, None, None, :].eq(0.0)
1547
+ causal_mask[..., :mask_length] = causal_mask[..., :mask_length].masked_fill(padding_mask, min_dtype)
1548
+
1549
+ if (
1550
+ self.config._attn_implementation == "sdpa"
1551
+ and attention_mask is not None
1552
+ and attention_mask.device.type == "cuda"
1553
+ ):
1554
+ # Attend to all tokens in fully masked rows in the causal_mask, for example the relevant first rows when
1555
+ # using left padding. This is required by F.scaled_dot_product_attention memory-efficient attention path.
1556
+ # Details: https://github.com/pytorch/pytorch/issues/110213
1557
+ causal_mask = AttentionMaskConverter._unmask_unattended(causal_mask, min_dtype)
1558
+
1559
+ return causal_mask
1560
+
1561
+ def _update_mamba_mask(self, attention_mask, cache_position):
1562
+ """
1563
+ No need for zeroing states when
1564
+ 1. Cached forward
1565
+ 2. Attending to all inputs
1566
+ """
1567
+ mamba_mask = attention_mask
1568
+ if cache_position[0] > 0 or (attention_mask is not None and torch.all(attention_mask == 1)):
1569
+ mamba_mask = None
1570
+ return mamba_mask
1571
+
1572
+
1573
+ @add_start_docstrings(
1574
+ """
1575
+ The NEMOTRONH Model transformer with a language modeling head on top (linear layer with weights not tied to the input
1576
+ embeddings).
1577
+ """,
1578
+ NEMOTRONH_START_DOCSTRING,
1579
+ )
1580
+ class NemotronHForCausalLM(NemotronHPreTrainedModel, GenerationMixin):
1581
+ _tied_weights_keys = ["lm_head.weight"]
1582
+
1583
+ def __init__(self, config):
1584
+ super().__init__(config)
1585
+ self.backbone = NemotronHModel(config)
1586
+ self.vocab_size = config.vocab_size
1587
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
1588
+
1589
+ # Initialize weights and apply final processing
1590
+ self.post_init()
1591
+
1592
+ def get_input_embeddings(self):
1593
+ return self.backbone.get_input_embeddings()
1594
+
1595
+ def set_input_embeddings(self, new_embeddings):
1596
+ return self.backbone.set_input_embeddings(new_embeddings)
1597
+
1598
+ def get_output_embeddings(self):
1599
+ return self.lm_head
1600
+
1601
+ def set_output_embeddings(self, new_embeddings):
1602
+ self.lm_head = new_embeddings
1603
+
1604
+ def get_decoder(self):
1605
+ return self.backbone
1606
+
1607
+ def set_decoder(self, decoder):
1608
+ self.backbone = decoder
1609
+
1610
+ def prepare_inputs_for_generation(
1611
+ self,
1612
+ input_ids,
1613
+ past_key_values=None,
1614
+ attention_mask=None,
1615
+ inputs_embeds=None,
1616
+ cache_position=None,
1617
+ position_ids=None,
1618
+ use_cache=True,
1619
+ **kwargs,
1620
+ ):
1621
+ # Copied from https://github.com/huggingface/transformers/blob/main/src/transformers/models/jamba/modeling_jamba.py
1622
+ # Overwritten -- uses `cache_params` as opposed to `past_key_values`
1623
+ empty_past_kv = past_key_values is None
1624
+
1625
+ # If we have cache: let's slice `input_ids` through `cache_position`, to keep only the unprocessed tokens
1626
+ # Exception 1: when passing input_embeds, input_ids may be missing entries
1627
+ # Exception 2: some generation methods do special slicing of input_ids, so we don't need to do it here
1628
+ # Exception 3: with synced GPUs cache_position may go out of bounds, but we only want dummy token in that case.
1629
+ # (we can't check exception 3 while compiling)
1630
+ if not empty_past_kv:
1631
+ if (
1632
+ inputs_embeds is not None # Exception 1
1633
+ or cache_position[-1] >= input_ids.shape[1] # Exception 3
1634
+ ):
1635
+ input_ids = input_ids[:, -cache_position.shape[0] :]
1636
+ elif input_ids.shape[1] != cache_position.shape[0]: # Default case (the "else", a no op, is Exception 2)
1637
+ input_ids = input_ids[:, cache_position]
1638
+ else:
1639
+ past_key_values = HybridMambaAttentionDynamicCache(
1640
+ self.config, input_ids.shape[0], self.dtype, device=self.device
1641
+ )
1642
+
1643
+ if attention_mask is not None and position_ids is None:
1644
+ # create position_ids on the fly for batch generation
1645
+ position_ids = attention_mask.long().cumsum(-1) - 1
1646
+ position_ids.masked_fill_(attention_mask == 0, 1)
1647
+ if not empty_past_kv:
1648
+ position_ids = position_ids[:, -input_ids.shape[1] :]
1649
+
1650
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
1651
+ if inputs_embeds is not None and empty_past_kv:
1652
+ model_inputs = {"inputs_embeds": inputs_embeds}
1653
+ else:
1654
+ model_inputs = {"input_ids": input_ids.contiguous()} # `contiguous()` needed for compilation use cases
1655
+
1656
+ model_inputs.update(
1657
+ {
1658
+ "position_ids": position_ids,
1659
+ "past_key_values": past_key_values,
1660
+ "use_cache": use_cache,
1661
+ "attention_mask": attention_mask,
1662
+ "logits_to_keep": self.config.num_logits_to_keep,
1663
+ "cache_position": cache_position,
1664
+ }
1665
+ )
1666
+ return model_inputs
1667
+
1668
+ @add_start_docstrings_to_model_forward(NEMOTRONH_INPUTS_DOCSTRING)
1669
+ @add_code_sample_docstrings(
1670
+ checkpoint=_CHECKPOINT_FOR_DOC,
1671
+ output_type=NemotronHCausalLMOutput,
1672
+ config_class=_CONFIG_FOR_DOC,
1673
+ )
1674
+ def forward(
1675
+ self,
1676
+ input_ids: Optional[torch.LongTensor] = None,
1677
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1678
+ position_ids: Optional[torch.LongTensor] = None,
1679
+ cache_params: Optional[HybridMambaAttentionDynamicCache] = None,
1680
+ labels: Optional[torch.LongTensor] = None,
1681
+ output_attentions: Optional[bool] = None,
1682
+ output_hidden_states: Optional[bool] = None,
1683
+ return_dict: Optional[bool] = None,
1684
+ use_cache: Optional[bool] = None,
1685
+ cache_position: Optional[torch.Tensor] = None,
1686
+ attention_mask: Optional[torch.Tensor] = None,
1687
+ **kwargs, # for now we need this for generation
1688
+ ) -> Union[Tuple, NemotronHCausalLMOutput]:
1689
+ r"""
1690
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1691
+ Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
1692
+ `labels = input_ids`. Indices are selected in `[-100, 0, ..., config.vocab_size]`. All labels set to `-100`
1693
+ are ignored (masked); the loss is only computed for labels in `[0, ..., config.vocab_size]`.
1694
+ """
1695
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1696
+
1697
+ output_hidden_states = (
1698
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1699
+ )
1700
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1701
+
1702
+ nemotron_h_outputs = self.backbone(
1703
+ input_ids,
1704
+ cache_params=cache_params,
1705
+ inputs_embeds=inputs_embeds,
1706
+ output_attentions=output_attentions,
1707
+ output_hidden_states=output_hidden_states,
1708
+ return_dict=return_dict,
1709
+ use_cache=use_cache,
1710
+ cache_position=cache_position,
1711
+ attention_mask=attention_mask,
1712
+ )
1713
+ hidden_states = nemotron_h_outputs[0]
1714
+
1715
+ # TODO: Check zamba_modeling.py: https://github.com/huggingface/transformers/blob/d7188ba600e36d3fd191b12e19f1b3bb81a8404f/src/transformers/models/zamba/modeling_zamba.py#L1284C1-L1286C2
1717
+ logits = self.lm_head(hidden_states.to(self.lm_head.weight.dtype)).float()
1718
+
1719
+ loss = None
1720
+ if labels is not None:
1721
+ # move labels to correct device to enable model parallelism
1722
+ labels = labels.to(logits.device)
1723
+ # Shift so that tokens < n predict n
1724
+ shift_logits = logits[..., :-1, :].contiguous()
1725
+ shift_labels = labels[..., 1:].contiguous()
1726
+ # Flatten the tokens
1727
+ loss_fct = CrossEntropyLoss()
1728
+ loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
1729
+
1730
+ if not return_dict:
1731
+ output = (logits,) + nemotron_h_outputs[1:]
1732
+ return ((loss,) + output) if loss is not None else output
1733
+
1734
+ return NemotronHCausalLMOutput(
1735
+ loss=loss,
1736
+ logits=logits,
1737
+ cache_params=nemotron_h_outputs.cache_params,
1738
+ hidden_states=nemotron_h_outputs.hidden_states,
1739
+ attentions=nemotron_h_outputs.attentions,
1740
+ )
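For quick reference, here is a minimal usage sketch (not an official example) for loading this checkpoint with `transformers`. The model id is taken from the evaluation configs below, and `trust_remote_code=True` is needed because the repository ships the custom NemotronH modeling code above:

```python
# Minimal sketch; adjust dtype/device settings to your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16 checkpoint
    device_map="auto",
    trust_remote_code=True,      # loads the custom NemotronH modeling code
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```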
nemo-evaluator-launcher-configs/local_nvidia-nemotron-3-nano-30b-a3b-base.yaml ADDED
@@ -0,0 +1,78 @@
1
+ # SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
2
+ # SPDX-License-Identifier: Apache-2.0
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ #
16
+ #
17
+ # How to use:
18
+ #
19
+ # 1. copy this file locally or clone the repository
20
+ # 2. (optional) uncomment limit_samples in the config file to run with 10 samples for quick testing
21
+ # 3. export your HF token in the terminal (some benchmark datasets might be gated)
22
+ # 4. run `nemo-evaluator-launcher run --config path/to/local_nvidia-nemotron-3-nano-30b-a3b-base.yaml`
23
+ #
24
+ # ⚠️ WARNING:
25
+ # Always run full evaluations (without limit_samples) for actual benchmark results.
26
+ # Using a subset of samples is solely for testing configuration and setup.
27
+ # Results from such test runs should NEVER be used to compare models or
28
+ # report benchmark performance.
29
+ defaults:
30
+ - execution: local
31
+ - deployment: vllm
32
+ - _self_
33
+
34
+ execution:
35
+ output_dir: NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16
36
+ # mode: sequential # enables sequential execution
37
+
38
+ # specify deployment arguments
39
+ deployment:
40
+ image: vllm/vllm-openai:v0.12.0
41
+ checkpoint_path: null
42
+ hf_model_handle: nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16
43
+ served_model_name: nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16
44
+ tensor_parallel_size: 1
45
+ data_parallel_size: 1
46
+ extra_args: "--max-model-len 262144 --mamba_ssm_cache_dtype float32 --no-enable-prefix-caching"
47
+
48
+ # specify the benchmarks to evaluate
49
+ evaluation:
50
+ env_vars:
51
+ HF_TOKEN: HF_TOKEN
52
+ nemo_evaluator_config: # global config settings that apply to all tasks
53
+ config:
54
+ params:
55
+ max_retries: 5 # number of retries for API requests
56
+ request_timeout: 360 # timeout for API requests in seconds
57
+ parallelism: 4 # number of parallel requests
58
+ # limit_samples: 10 # uncomment to limit number of samples for quick testing
59
+ extra:
60
+ tokenizer: nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16
61
+ tokenizer_backend: huggingface
62
+ tasks:
63
+ - name: adlr_mmlu_pro_5_shot_base
64
+ - name: adlr_mmlu
65
+ - name: adlr_agieval_en_cot
66
+ - name: adlr_humaneval_greedy
67
+ - name: adlr_mbpp_sanitized_3_shot_greedy
68
+ - name: adlr_gsm8k_cot_8_shot
69
+ - name: adlr_minerva_math_nemo_4_shot
70
+ - name: adlr_math_500_4_shot_sampled
71
+ - name: adlr_arc_challenge_llama_25_shot
72
+ - name: hellaswag
73
+ - name: openbookqa
74
+ - name: piqa
75
+ - name: adlr_race
76
+ - name: adlr_winogrande_5_shot
77
+ - name: adlr_global_mmlu_lite_5_shot
78
+ - name: adlr_mgsm_native_cot_8_shot
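These launcher configs are Hydra-style YAML. As a minimal sketch (assuming the `omegaconf` package is installed; the path is illustrative), you can load and inspect the config above before launching:

```python
# Minimal sketch: load the launcher config and print the key settings.
from omegaconf import OmegaConf

cfg = OmegaConf.load(
    "nemo-evaluator-launcher-configs/local_nvidia-nemotron-3-nano-30b-a3b-base.yaml"
)
print(cfg.deployment.served_model_name)        # the model served by vLLM
print([t.name for t in cfg.evaluation.tasks])  # benchmark task names
```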
nemo-evaluator-launcher-configs/local_qwen3-30b-a3b-base.yaml ADDED
@@ -0,0 +1,77 @@
1
+ # SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
2
+ # SPDX-License-Identifier: Apache-2.0
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ #
16
+ #
17
+ # How to use:
18
+ #
19
+ # 1. copy this file locally
20
+ # 2. (optional) uncomment limit_samples in the config file to run with 10 samples for quick testing
21
+ # 3. export your HF token in the terminal (some benchmark datasets might be gated)
22
+ # 4. run `nemo-evaluator-launcher run --config path/to/local_qwen3-30b-a3b-base.yaml`
23
+ #
24
+ # ⚠️ WARNING:
25
+ # Always run full evaluations (without limit_samples) for actual benchmark results.
26
+ # Using a subset of samples is solely for testing configuration and setup.
27
+ # Results from such test runs should NEVER be used to compare models or
28
+ # report benchmark performance.
29
+ defaults:
30
+ - execution: local
31
+ - deployment: vllm
32
+ - _self_
33
+
34
+ execution:
35
+ output_dir: Qwen3-30B-A3B-Base
36
+ # mode: sequential # enables sequential execution
37
+
38
+ # specify deployment arguments
39
+ deployment:
40
+ image: vllm/vllm-openai:v0.11.0
41
+ checkpoint_path: null
42
+ hf_model_handle: Qwen/Qwen3-30B-A3B-Base
43
+ served_model_name: Qwen/Qwen3-30B-A3B-Base
44
+ tensor_parallel_size: 1
45
+ data_parallel_size: 1
46
+
47
+ # specify the benchmarks to evaluate
48
+ evaluation:
49
+ env_vars:
50
+ HF_TOKEN: HF_TOKEN
51
+ nemo_evaluator_config: # global config settings that apply to all tasks
52
+ config:
53
+ params:
54
+ max_retries: 5 # number of retries for API requests
55
+ request_timeout: 360 # timeout for API requests in seconds
56
+ parallelism: 4 # number of parallel requests
57
+ # limit_samples: 10 # uncomment to limit number of samples for quick testing
58
+ extra:
59
+ tokenizer: Qwen/Qwen3-30B-A3B-Base
60
+ tokenizer_backend: huggingface
61
+ tasks:
62
+ - name: adlr_mmlu_pro_5_shot_base
63
+ - name: adlr_mmlu
64
+ - name: adlr_agieval_en_cot
65
+ - name: adlr_humaneval_greedy
66
+ - name: adlr_mbpp_sanitized_3_shot_greedy
67
+ - name: adlr_gsm8k_cot_8_shot
68
+ - name: adlr_minerva_math_nemo_4_shot
69
+ - name: adlr_math_500_4_shot_sampled
70
+ - name: adlr_arc_challenge_llama_25_shot
71
+ - name: hellaswag
72
+ - name: openbookqa
73
+ - name: piqa
74
+ - name: adlr_race
75
+ - name: adlr_winogrande_5_shot
76
+ - name: adlr_global_mmlu_lite_5_shot
77
+ - name: adlr_mgsm_native_cot_8_shot
privacy.md ADDED
@@ -0,0 +1,13 @@
1
+ | Field | Response |
2
+ | :---- | :---- |
3
+ | Generatable or reverse engineerable personal data? | No |
4
+ | Personal data used to create this model? | No |
5
+ | Was consent obtained for any personal data used? | Not Applicable |
6
+ | A description of any methods implemented in data acquisition or processing, if any, to address the prevalence of personal data in the training data, where relevant and applicable. | We used only prompts that do not contain any personal data for synthetic data generation. |
7
+ | How often is the dataset reviewed? | Before Release |
8
+ | Is there provenance for all datasets used in training? | Yes |
9
+ | Does data labeling (annotation, metadata) comply with privacy laws? | Yes |
10
+ | Is data compliant with data subject requests for data correction or removal, if such a request was made? | No, not possible with externally-sourced data. |
11
+ | Applicable Privacy Policy | [NVIDIA Privacy Policy](https://www.nvidia.com/en-us/about-nvidia/privacy-policy/) |
12
+ | During AI model development, strict adherence to copyright policy ensured compliance through risk mitigation and legal reviews. Post-data collection, reserved rights content is identified and removed, with verified opt-out processes for rightsholders. Detailed records document due diligence and transparency. | True |
13
+ | We employ automated tools and data processing techniques during pre-training to identify and filter certain categories of personal information. Scans of training datasets detected no PII. | True. We employ automated tools and data processing techniques to scan for Personally Identifiable Information (PII) during pre-training, to identify and filter certain categories of personal information, including public-facing contact details such as email addresses and phone numbers. Scans of Common Crawl, CC-News, and Wikimedia datasets did not detect PII in the majority of samples. However, Microsoft Presidio indicated potential findings, including business contact information embedded in natural language, such as email addresses and phone numbers. Verified instances of PII were then removed through a combination of automated filtering and human-in-the-loop validation. This evaluation used a 3,000-sample subset per dataset, identified as the optimal threshold for maximizing embedder accuracy. |
safety.md ADDED
@@ -0,0 +1,9 @@
1
+ | Field | Response |
2
+ | :---- | :---- |
3
+ | Model Application Field(s): | Chat, Instruction Following, Chatbot Development, Code Generation, Reasoning, Customer Service |
4
+ | Describe the life critical impact (if present). | Not Applicable |
5
+ | Description of methods implemented in data acquisition or processing, if any, to address other types of potentially harmful data in the training, testing, and validation data: | We used a guard model for content safety to exclude potentially harmful data from training. |
6
+ | Description of any methods implemented in data acquisition or processing, if any, to address illegal or harmful content in the training data, including, but not limited to, child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII) | We used a Gemma-3 4B-based guard model trained on [Nemotron Content Safety Dataset v2](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0) for content safety to exclude potentially illegal or harmful content from the training. |
7
+ | Use Case Restrictions: | Abide by the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/). |
8
+ | Model and dataset restrictions: | The principle of least privilege (PoLP) is applied, limiting access for dataset generation and model development. Access restrictions on datasets are enforced during training, and dataset license constraints are adhered to. |
9
+ | This AI model was developed based on our policies to ensure responsible data handling and risk mitigation. The datasets used for training have been scanned for harmful content and illegal content, consistent with our policies including scanning for Child Sexual Abuse Material (CSAM). Ongoing review and monitoring mechanisms are in place based on our policies and to maintain data integrity. | True. We use [Nemotron Content Safety Dataset V2](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0) and an internal safety dataset specialized for minority sexuality for content safety evaluation to ensure the safety of this model. |
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
1
+ {
2
+ "bos_token": {
3
+ "content": "<s>",
4
+ "lstrip": false,
5
+ "normalized": false,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "eos_token": {
10
+ "content": "<|im_end|>",
11
+ "lstrip": false,
12
+ "normalized": false,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "unk_token": {
17
+ "content": "<unk>",
18
+ "lstrip": false,
19
+ "normalized": false,
20
+ "rstrip": false,
21
+ "single_word": false
22
+ }
23
+ }
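As a quick sanity check (a sketch using the standard `transformers` tokenizer API), the special tokens above should be picked up when loading the tokenizer:

```python
# Minimal sketch: verify the special tokens defined in special_tokens_map.json.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16")
print(tok.bos_token)  # <s>
print(tok.eos_token)  # <|im_end|>
print(tok.unk_token)  # <unk>
```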
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:623c34567aebb18582765289fbe23d901c62704d6518d71866e0e58db892b5b7
3
+ size 17077484
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff