---
configs:
- config_name: measeval
  data_files:
  - split: train
    path: measeval_paragraph_level_no_spans_train.json
  - split: val
    path: measeval_paragraph_level_no_spans_val.json
  - split: test
    path: measeval_paragraph_level_no_spans_test.json
- config_name: bm
  data_files:
  - split: train
    path: bm_paragraph_level_no_spans_train.json
  - split: val
    path: bm_paragraph_level_no_spans_val.json
  - split: test
    path: bm_paragraph_level_no_spans_test.json
- config_name: msp
  data_files:
  - split: train
    path: msp_paragraph_level_no_spans_train.json
  - split: val
    path: msp_paragraph_level_no_spans_val.json
  - split: test
    path: msp_paragraph_level_no_spans_test.json
- config_name: all
  data_files:
  - split: train
    path:
    - measeval_paragraph_level_no_spans_train.json
    - bm_paragraph_level_no_spans_train.json
    - msp_paragraph_level_no_spans_train.json
  - split: val
    path:
    - measeval_paragraph_level_no_spans_val.json
    - bm_paragraph_level_no_spans_val.json
    - msp_paragraph_level_no_spans_val.json
  - split: test
    path:
    - measeval_paragraph_level_no_spans_test.json
    - bm_paragraph_level_no_spans_test.json
    - msp_paragraph_level_no_spans_test.json
task_categories:
- token-classification
language:
- en
tags:
- chemistry
- biology
size_categories:
- n<1K
---
# A Multi-Domain Corpus for Measurement Extraction (Seq2Seq variant)

A detailed description of the corpus creation can be found in the paper cited below.
This dataset contains the training, validation, and test data for each of the three datasets `measeval`, `bm`, and `msp`. The `measeval` and `msp` datasets were adapted from the MeasEval (Harper et al., 2021) and the Material Synthesis Procedural (Mysore et al., 2019) corpora, respectively.

This repository aggregates the extractions to paragraph level for `msp` and `measeval`. Labels are given in JSON format as preparation for seq2seq training.
## How to load

```python
from datasets import load_dataset

# Only train, all domains
train_dataset = load_dataset("liy140/multidomain-measextract-corpus", "all", split="train")

# All measeval data
measeval_dataset = load_dataset("liy140/multidomain-measextract-corpus", "measeval", split=["train", "val", "test"])
```
## Create Seq2Seq samples

A single standard instruction is used, so that a prompt can be generated by merging the text and extraction columns:
```
### Instruction
You are an expert at extracting quantity, units and their related context from text.
Given a paragraph below identify each quantity and its related unit and related context, i.e. the measured entity and measured property if they exist.

### Paragraph
The H/H+ transition in the MC09 model occurs near 1.4Rp. If we replace the gray approximation with the full solar spectrum in this model, the H/H+ transition moves higher to 2–3Rp. This is because photons with different energies penetrate to different depths in the atmosphere, extending the heating profile in altitude around the heating peak. This is why the temperature at the 30 nbar level in the C2 model is 3800 K and not 1000 K. In order to test the effect of higher temperatures in the lower thermosphere, we extended the MC09 model to p0 = 1 μbar (with T0 = 1300 K) and again used the full solar spectrum for heating and ionization. With these conditions, the H/H+ transition moves up to 3.4Rp, in agreement with the C2 model. We conclude that the unrealistic boundary conditions and the gray approximation adopted by Murray-Clay et al. (2009) and Guo (2011) lead to an underestimated overall density of H and an overestimated ion fraction. Thus their density profiles yield a H Lyman α transit depth of the order of 2–3% i.e., not significantly higher than the visible transit depth.

### Extractions
[
  {
    "docId": "S0019103513005058-3154",
    "measured_entity": "Soluble sulfate",
    "measured_property": null,
    "quantity": "1.3 \u00b1 0.5 wt.%",
    "unit": "wt.%"
  },
  {
    "docId": "S0019103513005058-3154",
    "measured_entity": "soil",
    "measured_property": "perchlorate (ClO4-)",
    "quantity": "\u223c0.5 wt.%",
    "unit": "wt.%"
  },
  {
    "docId": "S0019103513005058-3154",
    "measured_entity": "perchlorate-sensitive electrode",
    "measured_property": "sensitive to nitrate",
    "quantity": "1000 times",
    "unit": "times"
  },
  {
    "docId": "S0019103513005058-3154",
    "measured_entity": "Viking 1 and Viking 2 landing sites",
    "measured_property": "perchlorate",
    "quantity": "\u2a7d1.6%",
    "unit": "%"
  },
  {
    "docId": "S0019103513005058-3154",
    "measured_entity": "martian meteorite EETA79001",
    "measured_property": "Native perchlorate",
    "quantity": "<1 ppm by mass",
    "unit": "ppm by mass"
  }
]
```
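The merging step described above can be sketched as a small formatting function. This is a minimal illustration, not the authors' pipeline; the column names `text` and `extraction` are assumptions based on the description above, so adjust them to the actual dataset schema.

```python
import json

# Standard instruction reused for every sample
INSTRUCTION = (
    "You are an expert at extracting quantity, units and their related context from text.\n"
    "Given a paragraph below identify each quantity and its related unit and related "
    "context, i.e. the measured entity and measured property if they exist."
)

def build_prompt(example):
    """Merge the text and extraction columns of one sample into a seq2seq prompt.

    Assumes the columns are named 'text' and 'extraction' (hypothetical names,
    see the note above); the extractions may be stored as a list of dicts or as
    an already-serialized JSON string.
    """
    extractions = example["extraction"]
    if isinstance(extractions, str):
        extractions = json.loads(extractions)
    return (
        f"### Instruction\n{INSTRUCTION}\n\n"
        f"### Paragraph\n{example['text']}\n\n"
        f"### Extractions\n{json.dumps(extractions, indent=2)}"
    )

# Toy sample mirroring the structure shown above
sample = {
    "text": "Soluble sulfate was measured at 1.3 ± 0.5 wt.%.",
    "extraction": [
        {
            "docId": "S0019103513005058-3154",
            "measured_entity": "Soluble sulfate",
            "measured_property": None,
            "quantity": "1.3 ± 0.5 wt.%",
            "unit": "wt.%",
        }
    ],
}
prompt = build_prompt(sample)
```

With `datasets`, the same function can be applied to every row via `dataset.map(lambda ex: {"prompt": build_prompt(ex)})`.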
## Citation

```bibtex
@inproceedings{li-etal-2023-multi-source,
    title = "Multi-Source (Pre-)Training for Cross-Domain Measurement, Unit and Context Extraction",
    author = "Li, Yueling  and
      Martschat, Sebastian  and
      Ponzetto, Simone Paolo",
    booktitle = "The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.bionlp-1.1",
    pages = "1--25",
    abstract = "We present a cross-domain approach for automated measurement and context extraction based on pre-trained language models. We construct a multi-source, multi-domain corpus and train an end-to-end extraction pipeline. We then apply multi-source task-adaptive pre-training and fine-tuning to benchmark the cross-domain generalization capability of our model. Further, we conceptualize and apply a task-specific error analysis and derive insights for future work. Our results suggest that multi-source training leads to the best overall results, while single-source training yields the best results for the respective individual domain. While our setup is successful at extracting quantity values and units, more research is needed to improve the extraction of contextual entities. We make the cross-domain corpus used in this work available online.",
}
```