
Dataset Card for MuLMS

Example annotation in the Multi-Layer Materials Science Corpus (image source: MuLMS: A Multi-Layer Annotated Text Corpus for Information Extraction in the Materials Science Domain)

Dataset Description

The Multi-Layer Materials Science corpus (MuLMS) consists of 50 documents (licensed CC BY) from the materials science domain, spanning the following 7 subareas: "Electrolysis", "Graphene", "Polymer Electrolyte Fuel Cell (PEMFC)", "Solid Oxide Fuel Cell (SOFC)", "Polymers", "Semiconductors" and "Steel". It was exhaustively annotated by domain experts. Sentence-level and token-level annotations are provided for the following NLP tasks:

  • Measurement Frames: Measurement annotations are treated in a frame-like fashion, using the span type MEASUREMENT to mark the triggers (e.g., was measured, is plotted) that introduce the Measurement frame to the discourse. Deciding whether a sentence contains a measurement trigger is treated as a sentence-level task, while determining the span that triggers the measurement frame is treated as named entity recognition.
  • Named Entities: There are 12 token-level named entities (+ Measurement trigger) available in MuLMS. Named entities can span across multiple tokens.
  • Relations: MuLMS provides relations between pairs of entities. There are two types of relations: measurement-related relations, which always start at MEASUREMENT trigger spans, and further relations, which do not start at a specific Measurement annotation.
  • Argumentative Zones: Each sentence in MuLMS is assigned a rhetorical function in the discourse (e.g., Background or Experiment_Preparation). There are 12 argumentative zones in MuLMS, which leads to a sentence-level classification task.

You can find all experiment code files and further information in the MuLMS-AZ Repo and MuLMS Repo. For dataset statistics, please refer to both papers listed below; they also contain detailed explanations of all parts of MuLMS.

Dataset Details

MuLMS provides all annotated files in UIMA CAS XMI format, which can be opened with annotation tools that read these files, such as INCEpTION.

Important: To use the dataset reader, please install the UIMA CAS Python reader puima using the following command: pip install git+https://github.com/annefried/puima.git.

Dataset Sources

Uses

Direct Use

This dataset aims at information extraction from materials science documents. It enables the training of (neural) classifiers that can be used for downstream tasks such as NER and relation extraction. Please refer to both repos linked above for training BERT-like models on all NLP tasks provided in MuLMS.

Dataset Structure

MuLMS offers two configs: MuLMS_Corpus, which loads the entire MuLMS dataset, and NER_Dependencies, which loads only named entities in CoNLL format in order to train models in the NER-as-dependency-parsing setting.

MuLMS is divided into three splits: train, validation, and test. Furthermore, train is divided into five sub-splits, namely tune1,...,tune5. This allows for model training on four sub-splits, early stopping on the remaining fifth, model picking on validation, and evaluation only once on test. HuggingFace datasets do not support such sub-splits, hence they must be loaded as train and post-processed and filtered afterward in a custom dataset loader.
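The filtering step described above can be sketched in plain Python. The `data_split` field name follows this card; the example rows are invented placeholders standing in for instances of the train split:

```python
# Sketch: selecting tune sub-splits for fold-wise training, assuming each
# instance loaded from the "train" split carries a "data_split" field with
# values tune1..tune5 (as described in this card).

def select_folds(rows, train_folds, dev_fold):
    """Partition rows into training folds and an early-stopping fold."""
    train = [r for r in rows if r["data_split"] in train_folds]
    dev = [r for r in rows if r["data_split"] == dev_fold]
    return train, dev

# Invented placeholder rows cycling through the five sub-splits.
rows = [{"sentence": f"s{i}", "data_split": f"tune{i % 5 + 1}"} for i in range(10)]
train, dev = select_folds(rows, {"tune1", "tune2", "tune3", "tune4"}, "tune5")
# train holds tune1-tune4 rows; dev holds the tune5 rows for early stopping.
```

With `datasets`, the same logic can be applied via `Dataset.filter` on the loaded train split.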

Dataset Config MuLMS_Corpus

  • doc_id: ID of the source document that can be used to lookup the metadata of the paper in MuLMS_Corpus_Metadata.csv.
  • sentence: Each instance in the dataset corresponds to one sentence extracted from scientific papers. These sentences are listed in this field.
  • tokens: Pre-tokenized sentences. Each instance is a list of tokens.
  • begin_offset: Offset of the beginning of each sentence within the full text of the document.
  • end_offset: Offset of the end of each sentence within the full text of the document.
  • AZ_labels: The argumentative zone (= rhetorical function) of each sentence in the discourse of a materials science publication.
  • Measurement_label: Indicates for each sentence whether it contains a measurement description, i.e., a measurement-frame-evoking trigger word, or not.
  • NER_labels: Contains lists with named entities (NEs) per instance. The lists are index-aligned: every named entity occupies the same index in all of them, i.e., all 0-th elements belong to the same entity, and so on.
    • text: List of tokens that are contained in the current sentence instance.
    • id: Unique ID for each named entity
    • value: The named entity class
    • begin: Character offsets of the begin tokens of each NE
    • end: Character offsets of the end tokens of each NE
    • tokenIndices: Token index in the list of tokens
  • NER_labels_BILOU: BILOU tag sequence per token in the sentence (B = begin, I = inside, L = last, O = outside, U = unit, i.e., single-token entity).
  • relations: Lists of relations between pairs of entities. As with the named entities, each relation corresponds to the same index in all three lists (ne_id_gov, ne_id_dep, label).
    • ne_id_gov: List of NE entity IDs that act as head of the relation
    • ne_id_dep: List of NE entity IDs that are the tail of the relation
    • label: Relation label between both entities
  • docFileName: Name of the source document in the corpus
  • data_split: Indicates the split which a document belongs to (tune1/2/3/4/5, dev, test)
  • category: One of the 7 materials science sub-domains in MuLMS (electrolysis, graphene, PEMFC, SOFC, polymers, semiconductors, steel)
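The BILOU encoding in NER_labels_BILOU can be derived from token-index spans. A minimal sketch (the function name and example spans are illustrative, not part of the dataset API; spans are assumed non-overlapping with inclusive end indices):

```python
def spans_to_bilou(num_tokens, spans):
    """Convert (start, end, label) token spans (end inclusive) to BILOU tags.

    Assumes non-overlapping spans; tokens outside any span are tagged O.
    """
    tags = ["O"] * num_tokens
    for start, end, label in spans:
        if start == end:
            tags[start] = f"U-{label}"  # unit: single-token entity
        else:
            tags[start] = f"B-{label}"  # begin
            for i in range(start + 1, end):
                tags[i] = f"I-{label}"  # inside
            tags[end] = f"L-{label}"    # last
    return tags

# Invented example: "The conductivity was measured"
tags = spans_to_bilou(4, [(1, 1, "PROPERTY"), (2, 3, "MEASUREMENT")])
# → ['O', 'U-PROPERTY', 'B-MEASUREMENT', 'L-MEASUREMENT']
```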
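Because named entities and relations are stored as index-aligned parallel lists, decoding an instance into readable triples amounts to zipping the lists and resolving entity IDs. A sketch with invented example values (the field names follow this card):

```python
def decode_relations(ner_labels, relations):
    """Resolve relation triples (head, label, tail) from the index-aligned
    parallel lists of the MuLMS_Corpus config."""
    # Map NE id -> entity class via the parallel NE lists.
    by_id = dict(zip(ner_labels["id"], ner_labels["value"]))
    return [
        (by_id[gov], label, by_id[dep])
        for gov, dep, label in zip(
            relations["ne_id_gov"], relations["ne_id_dep"], relations["label"]
        )
    ]

# Invented example instance:
ner = {"id": [10, 11], "value": ["MEASUREMENT", "PROPERTY"]}
rels = {"ne_id_gov": [10], "ne_id_dep": [11], "label": ["measuresProperty"]}
triples = decode_relations(ner, rels)
# → [('MEASUREMENT', 'measuresProperty', 'PROPERTY')]
```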

Dataset Config NER_Dependencies

Each instance in this config refers to one token and carries a copy of the entire sentence, i.e., for n tokens in a sentence, the text of the sentence is given n times.

  • index: Unique instance ID for each token.
  • ID: Sentence ID. As opposed to the other config, the sentences here are not sorted by document, and each sentence is repeated in full for every token it contains.
  • Sentence: Sentence string
  • Token_ID: Unique ID for each token within each sentence. The ID is reset for each new sentence.
  • Token: Token string
  • NE_Dependencies: The named entity tag of form k:LABEL, where k refers to the ID of the begin token and LABEL to the named entity. The entity ends at the token holding this label.
  • data_split: Indicates the split which a document belongs to (tune1/2/3/4/5, dev, test)
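Since each entity's end token carries a k:LABEL tag pointing back to its begin token, spans can be recovered in a single pass over the tokens. A minimal sketch under the assumption that k is a 0-based token index and that non-entity tokens carry an empty tag (adjust to the actual conventions in the data):

```python
def decode_ne_dependencies(tags):
    """Recover (begin, end, label) token spans from per-token "k:LABEL" tags.

    Each entity is marked on its end token as "k:LABEL", where k is the
    (assumed 0-based) index of its begin token. Non-entity tokens are
    assumed here to carry an empty tag.
    """
    spans = []
    for end, tag in enumerate(tags):
        if tag:
            begin, label = tag.split(":")
            spans.append((int(begin), end, label))
    return spans

# Invented example, tokens: "measured the ionic conductivity"
spans = decode_ne_dependencies(["0:MEASUREMENT", "", "", "2:PROPERTY"])
# → [(0, 0, 'MEASUREMENT'), (2, 3, 'PROPERTY')]
```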

Labels

For the different layers, the following labels are available:

  • Measurement Frames:
    • Measurement
    • Qual_Measurement
  • Named Entities:
    • MAT
    • NUM
    • VALUE
    • UNIT
    • PROPERTY
    • FORM
    • MEASUREMENT (measurement frame-evoking trigger)
    • CITE
    • SAMPLE
    • TECHNIQUE
    • DEV
    • RANGE
    • INSTRUMENT
  • Relations:
    • hasForm
    • measuresProperty
    • usedAs
    • propertyValue
    • conditionProperty
    • conditionSample
    • conditionPropertyValue
    • usesTechnique
    • measuresPropertyValue
    • usedTogether
    • conditionEnv
    • usedIn
    • conditionInstrument
    • takenFrom
    • dopedBy
  • Argumentative Zones:
    • Motivation
    • Background
      • PriorWork
    • Experiment
      • Preparation
      • Characterization
    • Explanation
    • Results
    • Conclusion
    • Heading
    • Caption
    • Metadata

Dataset Creation

Curation Rationale

Keeping track of all relevant recent publications and experimental results for a research area is a challenging task. MuLMS addresses this problem by providing a large set of annotated documents that allow for training models that can be used for automated information extraction and answering search queries in materials science documents.

Source Data

You can find all the details for every document in this corpus in MuLMS_Corpus_Metadata.csv.

Who are the source data producers?

You can find all the authors for every document in this corpus in MuLMS_Corpus_Metadata.csv.

Annotation process

The annotation process included guideline design in dedicated discussion sessions. Afterward, the text files were annotated using INCEpTION.

Who are the annotators?

The annotators worked collaboratively to annotate the dataset in the best possible way. All people in this project have a background in either materials science or computer science. This synergy incorporates both perspectives: the materials scientists' deep knowledge of the topics themselves and the computer scientists' focus on automatically processing text data in a structured fashion.

Personal and Sensitive Information

This dataset does not contain any personal, sensitive or private data. MuLMS builds upon publicly available scientific publications and all authors are credited accordingly.

Citation

If you use our software or dataset in your scientific work, please cite both papers:

BibTeX:

@misc{schrader2023mulms,
      title={MuLMS: A Multi-Layer Annotated Text Corpus for Information Extraction in the Materials Science Domain}, 
      author={Timo Pierre Schrader and Matteo Finco and Stefan Grünewald and Felix Hildebrand and Annemarie Friedrich},
      year={2023},
      eprint={2310.15569},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

@inproceedings{schrader-etal-2023-mulms,
    title = "{M}u{LMS}-{AZ}: An Argumentative Zoning Dataset for the Materials Science Domain",
    author = {Schrader, Timo  and
      B{\"u}rkle, Teresa  and
      Henning, Sophie  and
      Tan, Sherry  and
      Finco, Matteo  and
      Gr{\"u}newald, Stefan  and
      Indrikova, Maira  and
      Hildebrand, Felix  and
      Friedrich, Annemarie},
    booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.codi-1.1",
    doi = "10.18653/v1/2023.codi-1.1",
    pages = "1--15",
}

Changes

Changes to the source code from the original repo are listed in the CHANGELOG file.

Copyright

Experiment resources related to the MuLMS corpus.
Copyright (c) 2023 Robert Bosch GmbH

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.

License

This software is open-sourced under the AGPL-3.0 license. See the LICENSE_CODE file for details. The MuLMS corpus is released under the CC BY-SA 4.0 license. See the LICENSE_CORPUS file for details.

Dataset Card Authors

  • Timo Pierre Schrader (Bosch Center for AI, University of Augsburg)
  • Matteo Finco (Bosch Research)
  • Stefan Grünewald (Bosch Center for AI, University of Stuttgart)
  • Felix Hildebrand (Bosch Research)
  • Annemarie Friedrich (University of Augsburg)

Dataset Card Contact

For all questions, please contact Timo Schrader.
