---
license: openrail
task_categories:
- image-to-text
tags:
- ocr
- latex-ocr
- image2latex
dataset_info:
- config_name: cleaned_formulas
  features:
  - name: image
    dtype: image
  - name: latex_formula
    dtype: string
  splits:
  - name: train
    num_bytes: 2918992848.46
    num_examples: 552340
  download_size: 2778067493
  dataset_size: 2918992848.46
- config_name: raw_formulas
  features:
  - name: latex_formula
    dtype: string
  splits:
  - name: train
    num_bytes: 240965616
    num_examples: 1006245
  download_size: 89507618
  dataset_size: 240965616
configs:
- config_name: cleaned_formulas
  data_files:
  - split: train
    path: cleaned_formulas/train-*
- config_name: raw_formulas
  data_files:
  - split: train
    path: raw_formulas/train-*
---

# 𝑩𝑰𝑮 𝑵𝑬𝑾𝑺‼️

📮 [2𝟎2𝟒-𝟎2] We trained a formula recognition model, [𝐓𝐞𝐱𝐓𝐞𝐥𝐥𝐞𝐫](https://github.com/OleehyO/TexTeller?tab=readme-ov-file), using the latex-formulas dataset. It converts images of formulas into LaTeX code with **high accuracy** and **strong generalization**, covering **most formula recognition scenarios**.

> For more details, please refer to the [𝐓𝐞𝐱𝐓𝐞𝐥𝐥𝐞𝐫 GitHub repository](https://github.com/OleehyO/TexTeller?tab=readme-ov-file).

# Dataset Description

> [Chinese version](./README_zh.md)

There are two datasets: **raw_formulas** and **cleaned_formulas** (the latter contains **550K** formula-image pairs).

We scraped approximately 1 million LaTeX formula image-text pairs from *arXiv*, uncleaned and without text segmentation, to create the *raw_formulas* dataset. After cleaning *raw_formulas* and merging it with the [im2latex-100K](https://zenodo.org/records/56198#.V2px0jXT6eA) dataset, we obtained the *cleaned_formulas* dataset, which contains **550K** formula-image pairs.

Rendering the images corresponding to the formulas requires the following external packages:

* amsmath
* amsfonts
* amssymb
* mathtools

## Usage

For the **raw_formulas** dataset:

```python
from datasets import load_dataset

data = load_dataset("OleehyO/latex-formulas", "raw_formulas")
```

For the **cleaned_formulas** dataset:

```python
from datasets import load_dataset

data = load_dataset("OleehyO/latex-formulas", "cleaned_formulas")
```

## Details About the *raw_formulas* Dataset

We scraped LaTeX formulas containing the following environments:

* equation
* align
* align*
* gather
* gather*

The formulas do not include the following content:

* \label
* %
* \quad
* \qquad
* \vspace
* \hspace
* \resizebox
* \scalebox
* \rotatebox
* \parbox
* \fbox
* \makebox
* \raisebox
* \addvspace
* \hfill
* \vfill
* \textwidth
* \textheight
* \rule

## Preprocessing Details of the *cleaned_formulas* Dataset

### Cleaning

* We removed useless junk data from both *raw_formulas* and [im2latex-100K](https://zenodo.org/records/56198#.V2px0jXT6eA).
* We deleted overly complex formulas from both *raw_formulas* and [im2latex-100K](https://zenodo.org/records/56198#.V2px0jXT6eA):
  * Formulas were deleted if the aspect ratio of the corresponding rendered image was greater than 0.8.
  * Formulas with a character length greater than 200 were deleted.
* In the formulas from both *raw_formulas* and [im2latex-100K](https://zenodo.org/records/56198#.V2px0jXT6eA), the following content was removed:
  * \tag
  * \text
  * \begin{split}
  * \end{split}
  * \nonumber
  * \notag
* The `equation`, `equation*`, `align`, and `\[...\]` environments in *raw_formulas* were all replaced with the `align*` environment.
* We deleted formulas from *raw_formulas* that contained custom macros.
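
The text-level cleaning steps above are straightforward to reproduce with string and regex operations. Below is a minimal, hypothetical sketch (not the script used to build this dataset) of the length filter, token stripping, environment normalization, and custom-macro check; the aspect-ratio filter is omitted because it requires rendering each formula first, and the exact handling of `\text` contents is an assumption.

```python
import re
from typing import Optional

# Junk tokens stripped from each formula (per the list above).
STRIP_TOKENS = [r"\nonumber", r"\notag", r"\begin{split}", r"\end{split}"]

MAX_LENGTH = 200  # formulas longer than this are dropped


def clean_formula(latex: str) -> Optional[str]:
    """Return a normalized formula, or None if it should be discarded."""
    # Normalize wrapping environments to align* (equation* before equation
    # so the starred variant is matched first).
    for env in ("equation*", "equation", "align"):
        latex = latex.replace(rf"\begin{{{env}}}", r"\begin{align*}")
        latex = latex.replace(rf"\end{{{env}}}", r"\end{align*}")
    latex = latex.replace(r"\[", r"\begin{align*}").replace(r"\]", r"\end{align*}")

    # Remove \tag{...}; unwrap \text{...} (assumption: keep its contents).
    latex = re.sub(r"\\tag\*?\{[^{}]*\}", "", latex)
    latex = re.sub(r"\\text\{([^{}]*)\}", r"\1", latex)

    # Strip the remaining junk tokens.
    for tok in STRIP_TOKENS:
        latex = latex.replace(tok, "")

    # Drop formulas that define custom macros or are overly long.
    if r"\newcommand" in latex or r"\def" in latex:
        return None
    latex = latex.strip()
    if len(latex) > MAX_LENGTH:
        return None
    return latex


if __name__ == "__main__":
    raw = r"\begin{equation} E = mc^2 \tag{1} \nonumber \end{equation}"
    print(clean_formula(raw))  # -> \begin{align*} E = mc^2   \end{align*}
```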