---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- nvidia
- NVLM
- pytorch
- multimodal
- conversational
library_name: transformers
---

<p align="center">
<img src="nvlm-logo-light.png" alt="NVLM logo" width="300" >
</p>

# Model Overview

## Description
This family of models performs vision-language and text-only tasks, including optical character recognition, multimodal reasoning, localization, common-sense reasoning, world-knowledge utilization, and coding.

## License/Terms of Use
[Creative Commons Attribution: Non-Commercial 4.0 International](https://spdx.org/licenses/CC-BY-NC-4.0) <br>

# Model Details

Today (September 17th, 2024), we introduce [NVLM 1.0](https://arxiv.org/abs/2409.11402), a family of frontier-class multimodal large language models (LLMs) that achieve state-of-the-art results on vision-language tasks, rivaling the leading proprietary models (e.g., GPT-4o) and open-access models (e.g., Llama 3-V 405B and InternVL 2). Remarkably, NVLM 1.0 shows improved text-only performance over its LLM backbone after multimodal training.

In this repo, we open-source NVLM-1.0-D-72B-mcore, the Megatron-Core weights of the decoder-only NVLM-D architecture, for the community. The model is trained with [Megatron-Core](https://github.com/NVIDIA/Megatron-LM/tree/main/examples/multimodal/nvlm).

## Reference(s)
[Paper](https://arxiv.org/abs/2409.11402) &ensp; [Inference Code (HF)](https://huggingface.co/nvidia/NVLM-D-72B/tree/main) &ensp; [Training Code](https://github.com/NVIDIA/Megatron-LM/tree/main/examples/multimodal/nvlm) &ensp; [Website](https://research.nvidia.com/labs/adlr/NVLM-1/)

## Benchmark Results
We train our model with [Megatron-Core](https://github.com/NVIDIA/Megatron-LM/tree/main/examples/multimodal/nvlm) and adapt the codebase to Hugging Face for model hosting, reproducibility, and inference.
We observe numerical differences between the Megatron and Hugging Face codebases, which are within the expected range of variation.
We therefore provide results from both codebases, for reproducibility and for comparison with other models.

Results (as of September 17th, 2024) on the multimodal benchmarks are as follows:

### Vision-language Benchmarks

| Benchmark | MMMU (val / test) | MathVista | OCRBench | AI2D | ChartQA | DocVQA | TextVQA | RealWorldQA | VQAv2 |
|--------------------------------|-------------------|-----------|----------|------|---------|--------|---------|-------------|-------|
| NVLM-D 1.0 72B (Megatron-Core) | 59.9 / 54.1 | 67.4 | 851 | 94.4 | 86.9 | 92.1 | 81.2 | 66.8 | 85.4 |
| Llama 3.2 90B | 60.3 / - | 57.3 | - | 92.3 | 85.5 | 90.1 | - | - | 78.1 |
| Llama 3-V 70B | 60.6 / - | - | - | 93.0 | 83.2 | 92.2 | 83.4 | - | 79.1 |
| Llama 3-V 405B | 64.5 / - | - | - | 94.1 | 85.8 | 92.6 | 84.8 | - | 80.2 |
| InternVL2-Llama3-76B | 55.2 / - | 65.5 | 839 | 94.8 | 88.4 | 94.1 | 84.4 | 72.2 | - |
| GPT-4V | 56.8 / 55.7 | 49.9 | 645 | 78.2 | 78.5 | 88.4 | 78.0 | 61.4 | 77.2 |
| GPT-4o | 69.1 / - | 63.8 | 736 | 94.2 | 85.7 | 92.8 | - | - | - |
| Claude 3.5 Sonnet | 68.3 / - | 67.7 | 788 | 94.7 | 90.8 | 95.2 | - | - | - |
| Gemini 1.5 Pro (Aug 2024) | 62.2 / - | 63.9 | 754 | 94.4 | 87.2 | 93.1 | 78.7 | 70.4 | 80.2 |

## Model Architecture

**Network Architecture:** Decoder-Only Transformer

### Input
**Input Type(s):** Text, Image <br>
**Input Format(s):** String, [Pillow Library-Supported Formats](https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html) <br>
**Input Dimensions:** One-Dimensional (1D), Two-Dimensional (2D) <br>
**Other Properties Related to Input:** Maximum Token Length = 128K Tokens <br>

### Output
**Output Type(s):** Text <br>
**Output Format:** String <br>
**Output Dimensions:** One-Dimensional (1D) <br>
**Other Properties Related to Output:** None <br>

## How to use

For training code, please refer to [Megatron-LM](https://github.com/NVIDIA/Megatron-LM/tree/main/examples/multimodal/nvlm). For inference with the Hugging Face-format checkpoint, see [NVLM-D-72B](https://huggingface.co/nvidia/NVLM-D-72B/tree/main); an illustrative preprocessing sketch and a minimal chat example follow.
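
Images reach the model as fixed-size tiles (the evaluation script below enables this via `--use-tiling`). Below is a minimal, illustrative sketch of tile-based image preprocessing, assuming 448×448 tiles, a budget of up to six tiles plus a low-resolution thumbnail, and ImageNet normalization; the constants and function names here are assumptions for illustration, not this repo's API (see the paper and the HF repo for the authoritative versions).

```
# Illustrative dynamic-tiling sketch; constants and names are assumptions,
# not the official NVLM preprocessing code.
import torch
import torchvision.transforms as T
from PIL import Image

TILE = 448        # assumed tile resolution
MAX_TILES = 6     # assumed maximum number of local tiles per image

def pick_grid(width, height, max_tiles=MAX_TILES):
    """Pick the (cols, rows) tile grid whose aspect ratio best matches the image."""
    best, best_diff = (1, 1), float("inf")
    for cols in range(1, max_tiles + 1):
        for rows in range(1, max_tiles // cols + 1):
            diff = abs(width / height - cols / rows)
            if diff < best_diff:
                best, best_diff = (cols, rows), diff
    return best

def load_image(path):
    """Return a (num_tiles + 1, 3, TILE, TILE) tensor: local tiles plus a thumbnail."""
    img = Image.open(path).convert("RGB")
    cols, rows = pick_grid(*img.size)
    resized = img.resize((cols * TILE, rows * TILE))
    transform = T.Compose([
        T.ToTensor(),
        T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ])
    tiles = [
        transform(resized.crop((c * TILE, r * TILE, (c + 1) * TILE, (r + 1) * TILE)))
        for r in range(rows) for c in range(cols)
    ]
    tiles.append(transform(img.resize((TILE, TILE))))  # global thumbnail tile
    return torch.stack(tiles)
```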
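
For a quick end-to-end run without Megatron, the following is a hedged sketch that assumes the InternVL-style `chat` API exposed by the `nvidia/NVLM-D-72B` repo's `trust_remote_code` modeling files; consult that model card for the authoritative version.

```
# Hedged sketch; the chat() call follows the nvidia/NVLM-D-72B model card and
# may differ from the mcore weights in this repo. load_image is the helper above.
import torch
from transformers import AutoModel, AutoTokenizer

path = "nvidia/NVLM-D-72B"
model = AutoModel.from_pretrained(
    path, torch_dtype=torch.bfloat16, device_map="auto",
    low_cpu_mem_usage=True, trust_remote_code=True).eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
generation_config = dict(max_new_tokens=256, do_sample=False)

# Text-only conversation.
print(model.chat(tokenizer, None, "Hello, who are you?", generation_config))

# Image + text conversation; the <image> placeholder marks where tiles are inserted.
pixel_values = load_image("example.jpg").to(torch.bfloat16).cuda()
print(model.chat(tokenizer, pixel_values, "<image>\nDescribe this image.",
                 generation_config))
```

At 72B parameters the model does not fit on a single GPU in bf16; `device_map="auto"` (or an explicit per-layer map, as in the HF model card) shards it across the available GPUs.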

### Prepare the environment

For reproducibility, we provide a Docker build file: [Dockerfile](https://github.com/NVIDIA/Megatron-LM/blob/main/examples/multimodal/Dockerfile).

### Evaluation

Run the [text generation script](https://github.com/NVIDIA/Megatron-LM/blob/main/examples/multimodal/nvlm/run_text_generation_qwen20_72b_internvit_6b.sh):

```
examples/multimodal/nvlm/run_text_generation_qwen20_72b_internvit_6b.sh --input-image-path /path/to/input/images --output-path /some/output/directory \
    --model-path /path/to/model.pt --gt-path /path/to/groundtruth/file --task generation-task-name --use-tiling
```

where `generation-task-name` is the name of the evaluation benchmark, such as `captioning`, `MMMU`, or `TextVQA`.

Then, run one of the evaluation scripts from `examples/multimodal`. For example:

```
python examples/multimodal/evaluate_mmmu.py --input-path /output/directory/from/generation
```

## Software Integration
**Runtime Engine(s):** <br>
* PyTorch <br>

**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Hopper <br>

**Supported Operating System(s):** <br>
* Linux <br>

## Inference
**Engine:** PyTorch <br>
**Test Hardware:** <br>
* H100 <br>

## Model Version(s)
* v1.0-D (NVLM-D)

## Training, Testing, and Evaluation Datasets

### Pre-Training Dataset

**Link** <br>
* [See Table 4](https://arxiv.org/abs/2409.11402) <br>

**Data Collection Method by dataset** <br>
* Hybrid: Automated, Human, Synthetic, Unknown <br>

**Labeling Method by dataset** <br>
* Hybrid: Automated, Human, Synthetic, Unknown <br>

**Properties**
* Trained on image captions, image-text pairs, natural images, charts, documents, scene descriptions, and mathematical reasoning. <br>

### Supervised Fine-Tuning Dataset

**Link** <br>
* [See Table 6](https://arxiv.org/abs/2409.11402) <br>

**Data Collection Method by dataset** <br>
* Hybrid: Automated, Human, Synthetic, Unknown <br>

**Labeling Method by dataset** <br>
* Hybrid: Automated, Human, Synthetic, Unknown <br>

**Properties**
* Trained on image captions; general knowledge; image-text pairs; natural images; charts; diagrams; documents; scene descriptions; science diagrams, lessons, textbook data, and question-answer pairs; visual instruction tuning; and mathematical reasoning. <br>

### Evaluation Dataset

**Link** <br>
* [See Section 6.1, "Benchmark"](https://arxiv.org/abs/2409.11402) <br>

**Data Collection Method by dataset** <br>
* Human <br>

**Labeling Method by dataset** <br>
* Human <br>

**Properties** <br>
* Evaluated on general knowledge, visual question answering, chart understanding, table understanding, optical character recognition, and mathematical reasoning. <br>

## Correspondence to
Wenliang Dai* (wdai@nvidia.com), Nayeon Lee* (nayeonl@nvidia.com), Boxin Wang* (boxinw@nvidia.com), Zhuolin Yang* (zhuoliny@nvidia.com), Wei Ping* (wping@nvidia.com)

\*Equal contribution

## Citation
<pre>
@article{nvlm2024,
  title={NVLM: Open Frontier-Class Multimodal LLMs},
  author={Dai, Wenliang and Lee, Nayeon and Wang, Boxin and Yang, Zhuolin and Liu, Zihan and Barker, Jon and Rintamaki, Tuomas and Shoeybi, Mohammad and Catanzaro, Bryan and Ping, Wei},
  journal={arXiv preprint arXiv:2409.11402},
  year={2024}
}
</pre>

## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).