---
license: apache-2.0
language:
  - en
  - zh
pretty_name: UniMER_Dataset
tags:
  - data
  - math
  - MER
size_categories:
  - 1M<n<10M
---

# UniMER Dataset

For detailed instructions on using the dataset, please refer to the project homepage: UniMERNet Homepage

## Introduction

The UniMER dataset is a specialized collection curated to advance the field of Mathematical Expression Recognition (MER). It comprises the UniMER-1M training set, with over one million instances covering a diverse and intricate range of mathematical expressions, and the UniMER test set, designed to benchmark MER models against real-world scenarios. The dataset details are as follows:

- UniMER-1M Training Set:
  - Total Samples: 1,061,791 LaTeX-image pairs (a minimal loading sketch follows this list)
  - Composition: a balanced mix of concise expressions and complex, extended formulas
  - Aim: to train robust, high-accuracy MER models, enhancing recognition precision and generalization
- UniMER Test Set:
  - Total Samples: 23,757, categorized into four types of expressions:
    - Simple Printed Expressions (SPE): 6,762 samples
    - Complex Printed Expressions (CPE): 5,921 samples
    - Screen Capture Expressions (SCE): 4,742 samples
    - Handwritten Expressions (HWE): 6,332 samples
  - Purpose: to provide a thorough evaluation of MER models across a spectrum of real-world conditions
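Each training sample pairs a rendered formula image with its LaTeX source. As a rough, non-authoritative sketch of how such pairs could be read, the snippet below assumes a hypothetical local layout (an `images/` folder plus a `labels.txt` file with one LaTeX string per line); see the project homepage for the actual archive structure and loading utilities.

```python
from pathlib import Path

from PIL import Image  # pip install pillow

# Hypothetical layout (illustration only, not the official archive structure):
#   UniMER-1M/
#     images/0000000.png, 0000001.png, ...
#     labels.txt   <- one LaTeX string per line, aligned with the image order
ROOT = Path("UniMER-1M")
LATEX_LABELS = ROOT.joinpath("labels.txt").read_text(encoding="utf-8").splitlines()


def load_pair(index: int) -> tuple[Image.Image, str]:
    """Return one (formula image, LaTeX string) pair under the assumed layout."""
    image = Image.open(ROOT / "images" / f"{index:07d}.png").convert("RGB")
    return image, LATEX_LABELS[index]


if __name__ == "__main__":
    image, latex = load_pair(0)
    print(image.size, latex)
```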

## Visual Data Samples

*Figure: sample expressions from the UniMER-Test set.*

## Data Statistics

| Dataset     | Subset | Source             | Sample Size |
|-------------|--------|--------------------|-------------|
| UniMER-1M   |        | Pix2tex Train      | 158,303     |
|             |        | Arxiv †            | 820,152     |
|             |        | CROHME Train       | 8,834       |
|             |        | HME100K Train ‡    | 74,502      |
| UniMER-Test | SPE    | Pix2tex Validation | 6,762       |
|             | CPE    | Arxiv †            | 5,921       |
|             | SCE    | PDF Screenshot †   | 4,742       |
|             | HWE    | CROHME & HME100K   | 6,332       |

† Indicates data collected, processed, and annotated by our team.
‡ For copyright compliance, please manually download this dataset portion: HME100K dataset.
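After downloading the splits (including the separately obtained HME100K portion), a quick count against the table above can catch incomplete downloads. This is only a sketch under an assumed layout of one folder of PNG images per UniMER-Test subset; adjust paths and file extensions to the actual release.

```python
from pathlib import Path

# Expected subset sizes taken from the statistics table above.
EXPECTED = {"spe": 6762, "cpe": 5921, "sce": 4742, "hwe": 6332}

root = Path("UniMER-Test")  # assumed extraction directory (hypothetical)

for name, expected in EXPECTED.items():
    found = sum(1 for _ in (root / name).glob("*.png"))  # assumes one PNG per sample
    status = "ok" if found == expected else f"expected {expected}"
    print(f"{name}: {found} images ({status})")
```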

## Acknowledgements

We would like to express our gratitude to the creators of the Pix2tex, CROHME, and HME100K datasets. Their foundational work has significantly contributed to the development of the UniMER dataset.

## Citations

@misc{wang2024unimernet,
      title={UniMERNet: A Universal Network for Real-World Mathematical Expression Recognition}, 
      author={Bin Wang and Zhuangcheng Gu and Chao Xu and Bo Zhang and Botian Shi and Conghui He},
      year={2024},
      eprint={2404.15254},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{conghui2022opendatalab,
    author={He, Conghui and Li, Wei and Jin, Zhenjiang and Wang, Bin and Xu, Chao and Lin, Dahua},
    title={OpenDataLab: Empowering General Artificial Intelligence with Open Datasets},
    howpublished = {\url{https://opendatalab.com}},
    year={2022}
}

# UniMER Dataset

For detailed instructions on using the dataset, please refer to the project homepage: UniMERNet Homepage

## Introduction

The UniMER dataset is released specifically for universal Mathematical Expression Recognition (MER). It contains the realistic and comprehensive UniMER-1M training set, with over one million instances covering a broad range of complex mathematical expressions, along with the carefully designed UniMER test set for evaluating MER models in real-world scenarios. The dataset details are as follows:

- UniMER-1M Training Set:
  - Total Samples: 1,061,791
  - Composition: a balanced mix of concise expressions and complex, extended formulas
  - Aim: to help train robust, high-accuracy MER models, improving recognition accuracy and generalization
- UniMER Test Set:
  - Total Samples: 23,757, divided into four types of expressions:
    - Simple Printed Expressions (SPE): 6,762 samples
    - Complex Printed Expressions (CPE): 5,921 samples
    - Screen Capture Expressions (SCE): 4,742 samples
    - Handwritten Expressions (HWE): 6,332 samples
  - Purpose: to provide a comprehensive evaluation platform for MER models, accurately assessing recognition of each formula type in real-world scenarios

## Visual Data Samples

*Figure: sample expressions from the UniMER-Test set.*

## Data Statistics

| Dataset     | Subset | Source             | Sample Size |
|-------------|--------|--------------------|-------------|
| UniMER-1M   |        | Pix2tex Train      | 158,303     |
|             |        | Arxiv †            | 820,152     |
|             |        | CROHME Train       | 8,834       |
|             |        | HME100K Train ‡    | 74,502      |
| UniMER-Test | SPE    | Pix2tex Validation | 6,762       |
|             | CPE    | Arxiv †            | 5,921       |
|             | SCE    | PDF Screenshot †   | 4,742       |
|             | HWE    | CROHME & HME100K   | 6,332       |

† Indicates data collected, processed, and annotated by our team.
‡ For copyright compliance, please manually download this portion of the dataset: HME100K dataset.

## Acknowledgements

We would like to thank the creators of the Pix2tex, CROHME, and HME100K datasets. Their foundational work has contributed significantly to the construction and release of the UniMER dataset.

## Citations

@misc{wang2024unimernet,
      title={UniMERNet: A Universal Network for Real-World Mathematical Expression Recognition}, 
      author={Bin Wang and Zhuangcheng Gu and Chao Xu and Bo Zhang and Botian Shi and Conghui He},
      year={2024},
      eprint={2404.15254},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{conghui2022opendatalab,
    author={He, Conghui and Li, Wei and Jin, Zhenjiang and Wang, Bin and Xu, Chao and Lin, Dahua},
    title={OpenDataLab: Empowering General Artificial Intelligence with Open Datasets},
    howpublished = {\url{https://opendatalab.com}},
    year={2022}
}