|
---
license: apache-2.0
size_categories:
- 100K<n<1M
task_categories:
- image-to-text
dataset_info:
- config_name: default
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 392473380.05
    num_examples: 76318
  download_size: 383401054
  dataset_size: 392473380.05
- config_name: full
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 385291867
    num_examples: 76318
  - name: validation
    num_bytes: 43364061.55
    num_examples: 8475
  - name: test
    num_bytes: 47643036.303
    num_examples: 9443
  download_size: 473618552
  dataset_size: 483485587.878
- config_name: human_handwrite
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 16181778
    num_examples: 1200
  - name: validation
    num_bytes: 962283
    num_examples: 68
  - name: test
    num_bytes: 906906
    num_examples: 70
  download_size: 18056029
  dataset_size: 18050967
- config_name: human_handwrite_print
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 3152122.8
    num_examples: 1200
  - name: validation
    num_bytes: 182615
    num_examples: 68
  - name: test
    num_bytes: 181698
    num_examples: 70
  download_size: 1336052
  dataset_size: 3516435.8
- config_name: small
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 261296
    num_examples: 50
  - name: validation
    num_bytes: 156489
    num_examples: 30
  - name: test
    num_bytes: 156489
    num_examples: 30
  download_size: 588907
  dataset_size: 574274
- config_name: synthetic_handwrite
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 496610333.066
    num_examples: 76266
  - name: validation
    num_bytes: 63147351.515
    num_examples: 9565
  - name: test
    num_bytes: 62893132.805
    num_examples: 9593
  download_size: 616418996
  dataset_size: 622650817.3859999
configs:
- config_name: default
  data_files:
  - split: train
    path: full/train-*
- config_name: full
  data_files:
  - split: train
    path: full/train-*
  - split: validation
    path: full/validation-*
  - split: test
    path: full/test-*
- config_name: human_handwrite
  data_files:
  - split: train
    path: human_handwrite/train-*
  - split: validation
    path: human_handwrite/validation-*
  - split: test
    path: human_handwrite/test-*
- config_name: human_handwrite_print
  data_files:
  - split: train
    path: human_handwrite_print/train-*
  - split: validation
    path: human_handwrite_print/validation-*
  - split: test
    path: human_handwrite_print/test-*
- config_name: small
  data_files:
  - split: train
    path: small/train-*
  - split: validation
    path: small/validation-*
  - split: test
    path: small/test-*
- config_name: synthetic_handwrite
  data_files:
  - split: train
    path: synthetic_handwrite/train-*
  - split: validation
    path: synthetic_handwrite/validation-*
  - split: test
    path: synthetic_handwrite/test-*
tags:
- code
---
|
|
|
# Dataset Repository for LaTeX OCR
|
|
|
This repository provides the data built specifically for [LaTeX_OCR](https://github.com/LinXueyuanStdio/LaTeX_OCR) and [LaTeX_OCR_PRO](https://github.com/LinXueyuanStdio/LaTeX_OCR). The data comes from `https://zenodo.org/record/56198#.V2p0KTXT6eA`, from `https://www.isical.ac.in/~crohme/`, and from our own construction.
|
|
|
If this dataset is helpful to you, please give it a ❤️ like!
|
|
|
New data added in the future will also be placed in this repository.
|
|
|
> The original data repository is on GitHub: [LinXueyuanStdio/Data-for-LaTeX_OCR](https://github.com/LinXueyuanStdio/Data-for-LaTeX_OCR).
|
|
|
## Datasets
|
|
|
This repository contains 5 datasets (the `default` config is an alias for the `train` split of `full`). Their split sizes can be checked programmatically, as shown in the sketch after this list.
|
|
|
1. `small` is a small dataset of 110 samples, intended for testing.
2. `full` is the complete printed-formula dataset of about 100k samples. The actual count is slightly below 100k because formulas that could not be rendered were removed using a LaTeX abstract syntax tree.
3. `synthetic_handwrite` is the complete handwritten dataset of about 100k samples, synthesized from the formulas in `full` using handwriting fonts; it can be viewed as human handwriting on paper. The actual count is slightly below 100k, for the same reason as above.
4. `human_handwrite` is a smaller handwritten dataset that better matches human handwriting on an electronic screen. It mainly comes from `CROHME`, and we have validated it with a LaTeX abstract syntax tree.
5. `human_handwrite_print` is a printed dataset derived from `human_handwrite`: the formulas are the same as in `human_handwrite`, and the images are rendered from those formulas with LaTeX.
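
The split sizes listed above can be checked without downloading the image data by reading the builder metadata with the standard `datasets` API. A minimal sketch (config names are taken from this card; the split info is read from the card's metadata):

```python
from datasets import load_dataset_builder

# Config names as listed in this card.
configs = ["small", "full", "synthetic_handwrite", "human_handwrite", "human_handwrite_print"]

for name in configs:
    # Reads only the repository metadata, not the images themselves.
    builder = load_dataset_builder("linxy/LaTeX_OCR", name)
    sizes = {split: info.num_examples for split, info in builder.info.splits.items()}
    print(name, sizes)
```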
|
|
|
## Usage
|
|
|
Load the training split:
|
|
|
- `name` can be one of `small`, `full`, `synthetic_handwrite`, `human_handwrite`, `human_handwrite_print`
- `split` can be one of `train`, `validation`, `test`
|
|
|
```python |
|
>>> from datasets import load_dataset |
|
>>> train_dataset = load_dataset("linxy/LaTeX_OCR", name="small", split="train") |
|
>>> print(train_dataset[2]["text"])
|
\rho _ { L } ( q ) = \sum _ { m = 1 } ^ { L } \ P _ { L } ( m ) \ { \frac { 1 } { q ^ { m - 1 } } } . |
|
>>> train_dataset[2] |
|
{'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=200x50 at 0x15A5D6CE210>, |
|
'text': '\\rho _ { L } ( q ) = \\sum _ { m = 1 } ^ { L } \\ P _ { L } ( m ) \\ { \\frac { 1 } { q ^ { m - 1 } } } .'} |
|
>>> len(train_dataset) |
|
50 |
|
``` |
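
The `image` feature decodes to a PIL image, so individual samples can be displayed or saved directly. A minimal sketch (the output filename is arbitrary):

```python
from datasets import load_dataset

train_dataset = load_dataset("linxy/LaTeX_OCR", name="small", split="train")

sample = train_dataset[2]
print(sample["image"].size, sample["image"].mode)  # e.g. (200, 50) RGB, as shown above

# Save the rendered formula image to disk for visual inspection.
sample["image"].save("sample_formula.png")
```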
|
|
|
Load all splits:
|
```python |
|
>>> from datasets import load_dataset |
|
>>> dataset = load_dataset("linxy/LaTeX_OCR", name="small") |
|
>>> dataset |
|
DatasetDict({ |
|
train: Dataset({ |
|
features: ['image', 'text'], |
|
num_rows: 50 |
|
}) |
|
validation: Dataset({ |
|
features: ['image', 'text'], |
|
num_rows: 30 |
|
}) |
|
test: Dataset({ |
|
features: ['image', 'text'], |
|
num_rows: 30 |
|
}) |
|
}) |
|
``` |
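
For training, the splits can be wrapped in a PyTorch `DataLoader` with a small collate function that converts the PIL images to tensors and keeps the LaTeX strings for the tokenizer of your choice. A minimal sketch, assuming `torch` and `torchvision` are installed; the target image size and batch size are arbitrary choices for illustration:

```python
import torch
from torch.utils.data import DataLoader
from torchvision import transforms
from datasets import load_dataset

dataset = load_dataset("linxy/LaTeX_OCR", name="small")

# Arbitrary preprocessing for illustration: resize to a fixed size and convert to tensors.
to_tensor = transforms.Compose([
    transforms.Resize((64, 256)),
    transforms.ToTensor(),
])

def collate_fn(batch):
    images = torch.stack([to_tensor(item["image"].convert("RGB")) for item in batch])
    texts = [item["text"] for item in batch]  # tokenize these with your own tokenizer
    return images, texts

train_loader = DataLoader(dataset["train"], batch_size=8, shuffle=True, collate_fn=collate_fn)

images, texts = next(iter(train_loader))
print(images.shape)  # torch.Size([8, 3, 64, 256])
print(texts[0])
```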
|
|
|
|
|
|