
MaintNorm Dataset Card

Overview

The MaintNorm dataset is a collection of 12,000 short English-language texts extracted from maintenance work orders at three major mining organisations in Australia. The dataset is annotated for both lexical normalisation and token-level entity tagging, making it a valuable resource for natural language processing research and applications in industrial contexts.

For further information about the annotation process and dataset characteristics, refer to the MaintNorm paper or visit the GitHub repository.

Dataset Structure

This dataset includes data from three distinct company-specific sources (company_a, company_b, company_c), along with a combined dataset that integrates data across these sources. This structure supports both granular and comprehensive analyses.
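Below is a minimal sketch for fetching the repository files from the Hub and listing the per-company subsets. The folder names and on-disk layout are assumptions, not confirmed by this card.

# Hedged sketch: download the dataset repository and look for the
# per-company subsets. Folder names below are assumptions.
from pathlib import Path
from huggingface_hub import snapshot_download

# Download a local copy of the dataset repository.
local_dir = snapshot_download(repo_id="nlp-tlp/MaintNorm", repo_type="dataset")

# Inspect the hypothetical subset folders, if present.
for subset in ("company_a", "company_b", "company_c", "combined"):
    subset_dir = Path(local_dir) / subset
    if subset_dir.is_dir():
        print(subset, sorted(p.name for p in subset_dir.iterdir()))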

Masking Scheme

To address privacy and data specificity, the following token-level entity tags are used:

  • <id>: Asset identifiers, for example ENG001 and rd1286.
  • <sensitive>: Sensitive information specific to organisations, including proprietary systems, third-party contractors, and names of personnel.
  • <num>: Numerical entities, such as 8 and 7001223.
  • <date>: Representations of dates, either in numerical form such as 10/10/2023 or phrase form such as 8th Dec.

Dataset Instances

The dataset adopts a standard normalisation format similar to that used in the WNUT shared tasks, with each text resembling the CoNLL-2003 format: one token per line, each token followed by its normalised or masked counterpart and separated from it by a tab, with blank lines separating individual work orders.

Examples

Exhaust	exhaust
Fan	fan
#6	number <num>
Tripping	tripping
c/b	circuit breaker

HF338	<id>
INVESTAGATE	investigate
24V	<num> V
FAULT	fault
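
For reference, here is a minimal Python sketch for reading files in this two-column layout. The file path, UTF-8 encoding, and the use of blank lines to separate work orders are assumptions based on the description and examples above.

# Hedged sketch: parse a MaintNorm-style "token<TAB>normalised" file.
# Assumes UTF-8 text with blank lines between work orders; the file
# name used in the usage note is hypothetical.
from pathlib import Path

def read_maintnorm(path):
    """Yield (source_tokens, target_tokens) pairs, one per work order."""
    source, target = [], []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if not line.strip():            # blank line ends a work order
            if source:
                yield source, target
                source, target = [], []
            continue
        token, normalised = line.split("\t", 1)
        source.append(token)
        target.append(normalised)
    if source:                          # flush the final work order
        yield source, target

# Example usage (hypothetical file name):
# for src, tgt in read_maintnorm("company_a/train.txt"):
#     print(list(zip(src, tgt)))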

Citation

Please cite the following paper if you use this dataset in your research:

@inproceedings{bikaun-etal-2024-maintnorm,
    title = "{M}aint{N}orm: A corpus and benchmark model for lexical normalisation and masking of industrial maintenance short text",
    author = "Bikaun, Tyler  and
      Hodkiewicz, Melinda  and
      Liu, Wei",
    editor = {van der Goot, Rob  and
      Bak, JinYeong  and
      M{\"u}ller-Eberstein, Max  and
      Xu, Wei  and
      Ritter, Alan  and
      Baldwin, Tim},
    booktitle = "Proceedings of the Ninth Workshop on Noisy and User-generated Text (W-NUT 2024)",
    month = mar,
    year = "2024",
    address = "San {\.G}iljan, Malta",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.wnut-1.7",
    pages = "68--78",
}