de-Rodrigo committed on
Commit 40732c7 • 1 Parent(s): 0baebf1

Debug README

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -121,11 +121,11 @@ tags:
 <img src="figs/merit-dataset.png" alt="Visual Abstract" width="500" />
 </p>
 
-# The MERIT Dataset :school_satchel::page_with_curl::trophy:
+# The MERIT Dataset 🎒📃🏆
 
 The MERIT Dataset is a multimodal dataset (image + text + layout) designed for training and benchmarking Large Language Models (LLMs) on Visually Rich Document Understanding (VrDU) tasks. It is a fully labeled synthetic dataset, and you can access the generation pipeline on [GitHub](https://github.com/nachoDRT/MERIT-Dataset).
 
-## Introduction :information_source:
+## Introduction ℹ️
 AI faces dynamic and technical issues that push end users to create and gather their own data. In addition, multimodal LLMs are attracting growing attention, but the datasets used to train them could be more complex, more flexible, and easier to gather or generate.
 
 In this research project, we identify school transcripts of records as a suitable niche for generating a challenging synthetic multimodal dataset (image + text + layout) for Token Classification or Sequence Generation.
@@ -135,7 +135,7 @@ In this research project, we identify school transcripts of records as a suitabl
 </p>
 
 
-## Hardware :gear:
+## Hardware ⚙️
 We ran the dataset generator on an MSI Meg Infinite X 10SF-666EU with an Intel Core i9-10900KF and an Nvidia RTX 2080 GPU, running Ubuntu 20.04. Energy values in the table are per 1k samples, and time values are per sample.
 
 | Task | Energy (kWh) | Time (s) |
@@ -144,7 +144,7 @@ We ran the dataset generator on an MSI Meg Infinite X 10SF-666EU with an Intel C
 | Modify samples in Blender | 0.366 | 34 |
 
 
-## Benchmark :muscle:
+## Benchmark 💪
 
 We train the LayoutLM family of models on Token Classification to demonstrate the suitability of our dataset. The MERIT Dataset poses a challenging scenario with more than 400 labels.
 
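The dataset card above presents MERIT as a Hugging Face-hosted multimodal dataset (image + text + layout) for Token Classification and Sequence Generation, so a short loading sketch may help readers get started. This is a minimal sketch using the `datasets` library; the repository ID `de-Rodrigo/merit`, the config name, and the field names are assumptions, so check the dataset card for the exact identifiers.

```python
# Minimal sketch: pull the MERIT Dataset from the Hugging Face Hub.
# The repo ID "de-Rodrigo/merit", the config name, and the field names are
# assumptions for illustration; check the dataset card for the real ones.
from datasets import load_dataset

ds = load_dataset("de-Rodrigo/merit", name="en-digital-seq", split="train")  # hypothetical config

sample = ds[0]
print(sample.keys())  # expect an image plus text/layout annotations and labels
```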
 
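The Hardware section reports energy per 1,000 samples but time per single sample, which is easy to mix up when budgeting a generation run. Below is a small arithmetic sketch that scales both figures to a target dataset size; it uses only the Blender row visible in this diff, assumes a sequential run, and the 10k-sample target is just an example.

```python
# Scale the Hardware table figures to a target dataset size.
# Energy is reported per 1,000 samples; time is reported per single sample.
# Only the "Modify samples in Blender" row is visible in this diff excerpt,
# and the sequential-run assumption is ours, not the README's.
ENERGY_KWH_PER_1K_SAMPLES = 0.366  # Blender modification step
TIME_S_PER_SAMPLE = 34             # Blender modification step


def estimate_blender_cost(n_samples: int) -> tuple[float, float]:
    """Return (energy in kWh, sequential wall time in hours) for n_samples."""
    energy_kwh = ENERGY_KWH_PER_1K_SAMPLES * n_samples / 1_000
    hours = TIME_S_PER_SAMPLE * n_samples / 3_600
    return energy_kwh, hours


energy, hours = estimate_blender_cost(10_000)
print(f"10k samples: ~{energy:.2f} kWh, ~{hours:.0f} h")  # ~3.66 kWh, ~94 h
```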
 
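The Benchmark section fine-tunes LayoutLM-family models for Token Classification over a label space of more than 400 classes. As a rough illustration of what that setup looks like with the `transformers` library, here is a minimal sketch; the LayoutLMv3 checkpoint, the exact label count, and the preprocessing flags are assumptions, since this excerpt does not specify the training configuration.

```python
# Minimal sketch: a LayoutLM-family model with a token-classification head sized
# for a large label space. The checkpoint, label count, and preprocessing flags
# are illustrative; the benchmark's actual configuration is not given here.
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

NUM_LABELS = 401  # placeholder for "more than 400 labels"

# apply_ocr=False assumes the dataset already provides words and bounding boxes
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = LayoutLMv3ForTokenClassification.from_pretrained(
    "microsoft/layoutlmv3-base", num_labels=NUM_LABELS
)

# A training example would pair the page image with its words, word-level boxes,
# and per-word label IDs; the processor packs them into model-ready tensors.
```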