---
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
tags:
- biology
- scientific-papers
pretty_name: Contextualizing Scientific Claims
size_categories:
- 1K<n<10K
---

# Contextualizing Scientific Claims

## Task 1: Evidence Identification

### Task description

Given a scientific claim and a relevant research paper, identify key figures or tables from the paper that provide supporting evidence for the claim, and rank them from most to least relevant.

> [!NOTE]
> Many figures are compound figures, with labeled subfigures (e.g., FIG 1A, FIG 1B). Sometimes the relevant grounding figure is a subfigure, and sometimes it is the whole (parent) figure. We therefore provide figure parses for each parent figure as well as its subfigures (e.g., we provide both FIG 1 and FIG 1A, FIG 1B). Accordingly, for NDCG scoring, predicted images that are parent/sub-figures of a gold figure label receive a relevance score of 0.5.
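To make the relevance scheme above concrete, here is a minimal, illustrative sketch of NDCG with 0.5 credit for parent/sub-figures. The parent/sub-figure check below is a simplified assumption made for illustration only; `task1_eval.py` (see Evaluation and Submission below) is the authoritative implementation.

```python
import math

def is_parent_or_sub(a: str, b: str) -> bool:
    """Crude, assumed heuristic: 'FIG 1' and 'FIG 1A' are treated as parent/
    sub-figure of one another (the longer label extends the shorter by letters)."""
    longer, shorter = (a, b) if len(a) > len(b) else (b, a)
    return longer != shorter and longer.startswith(shorter) and longer[len(shorter):].isalpha()

def relevance(pred: str, gold_labels: list[str]) -> float:
    """1.0 for an exact gold match, 0.5 for a parent/sub-figure of a gold label."""
    if pred in gold_labels:
        return 1.0
    if any(is_parent_or_sub(pred, gold) for gold in gold_labels):
        return 0.5
    return 0.0

def ndcg(ranking: list[str], gold_labels: list[str]) -> float:
    """NDCG of a predicted figure/table ranking against the gold labels."""
    dcg = sum(relevance(p, gold_labels) / math.log2(i + 2) for i, p in enumerate(ranking))
    idcg = sum(1.0 / math.log2(i + 2) for i in range(len(gold_labels)))
    return dcg / idcg if idcg > 0 else 0.0

# Example: the gold figure is "FIG 1A"; predicting "FIG 1" earns partial (0.5) credit.
print(ndcg(["FIG 1", "TAB 2"], ["FIG 1A"]))  # 0.5
```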
### Training and dev data description

There are currently 474 total scientific claims across the four datasets, in the following breakdown:

| Dataset                      | N   |
| ---------------------------- | --- |
| akamatsulab                  | 213 |
| BIOL403                      | 60  |
| dg-social-media-polarization | 78  |
| megacoglab                   | 123 |

393 were present in the initial release (in `task1-train-dev.json`), and 81 new claims were added on April 26. The full training dataset of 474 claims is in `task1-train-dev-2024-04-25.json`.

### Test data description

The test set consists of 111 total scientific claims across two datasets, in the following breakdown:

| Dataset     | N  |
| ----------- | -- |
| akamatsulab | 51 |
| megacoglab  | 60 |

## Task 2: Grounding Context Identification

### Task description

Given a scientific claim and a relevant research paper, identify all grounding context from the paper discussing methodological details of the experiment that resulted in this claim.

For the purposes of this task, grounding context is restricted to quotes from the paper. These grounding context quotes are typically dispersed throughout the full text, often far from where the supporting evidence is presented.

For maximal coverage for this task, search for text snippets that cover the following key aspects of the empirical methods of the claim:

1. **What** observable measures/data were collected
2. **How** (with what methods, analyses, etc.) from
3. **Who**(m) (which participants, what dataset, what population, etc.)

_NOTE_: we will not be scoring the snippets separately by context "category" (e.g., who/how/what): we provide them here to clarify the requirements of the task.

Here is an example claim with quotes as empirical methods context.

```
{
    "id": "megacoglab-W3sdOb60i",
    "claim": "US patents filed by inventors who were new to the patent's field tended to be more novel",
    "citekey": "artsParadiseNoveltyLoss2018a",
    "dataset": "megacoglab",
    "context": [
        "To assess patent novelty, we calculate new combinations (ln) as the logarithmic transformation of one plus the number of pairwise subclass combinations of a patent that appear for the first time in the US. patent database (Fleming et al. 2007, Jung and Jeongsik 2016). To do so, each pairwise combination of subclasses is compared with all pairwise combinations of all prior U.S. patents. (p. 5)",
        "we begin with the full population of inventors and collect all patents assigned to \ufb01rms but, by design, must restrict the sample to inventors who have at least two patents assigned to the same \ufb01rm. The advantage of this panel setup is that we can use inventor\u2013firm fixed effect models to control for unobserved heterogeneity among inventors and firms, which arguably have a strong effect on the novelty and value of creative output. This approach basically uses repeated patents of the same inventor within the same firm to identify whether the inventor creates more or less novel\u2014and more or less valuable\u2014patents when any subsequent patent is categorized in a new \ufb01eld. The sample includes 2,705,431 patent\u2013inventor observations assigned to 396,336 unique inventors and 46,880 unique firms, accounting for 473,419 unique inventor\u2013firm pairs. (p. 5)",
        "For each inventor-patent observation, we retrieve the three-digit technology classes of all prior patents of the focal inventor and identify whether there is any overlap between the three-digit technology classes of the focal patent and the three-digit technology classes linked o all prior patents of the same inventor. We rely on all classes assigned to a patent rather than just the primary class. Exploring new fields is a binary indicator that equals one in the absence of any overlapping class between all prior patents and the focal patent. (p. 6)",
        "we can use inventor\u2013\ufb01rm \ufb01xed effect models to control for unobserved heterogeneity among inventors and \ufb01rms, which arguably have a strong effect on the novelty and value of creative output (p. 5)",
        "we select the full population of inventors with U.S. patents assigned to \ufb01rms for 1975\u20132002 (p. 3)"
    ]
},
```

In this example, the quotes fall into the following aspects of empirical methods:

**What**:

> "To assess patent novelty, we calculate new combinations (ln) as the logarithmic transformation of one plus the number of pairwise subclass combinations of a patent that appear for the first time in the US. patent database (Fleming et al. 2007, Jung and Jeongsik 2016). To do so, each pairwise combination of subclasses is compared with all pairwise combinations of all prior U.S. patents. (p. 5)"
>
> "For each inventor-patent observation, we retrieve the three-digit technology classes of all prior patents of the focal inventor and identify whether there is any overlap between the three-digit technology classes of the focal patent and the three-digit technology classes linked o all prior patents of the same inventor. We rely on all classes assigned to a patent rather than just the primary class. Exploring new fields is a binary indicator that equals one in the absence of any overlapping class between all prior patents and the focal patent. (p. 6)"

**Who**:

> "we select the full population of inventors with U.S. patents assigned to \ufb01rms for 1975\u20132002 (p. 3)"

**How**:

> "we begin with the full population of inventors and collect all patents assigned to \ufb01rms but, by design, must restrict the sample to inventors who have at least two patents assigned to the same \ufb01rm. The advantage of this panel setup is that we can use inventor\u2013firm fixed effect models to control for unobserved heterogeneity among inventors and firms, which arguably have a strong effect on the novelty and value of creative output. This approach basically uses repeated patents of the same inventor within the same firm to identify whether the inventor creates more or less novel\u2014and more or less valuable\u2014patents when any subsequent patent is categorized in a new \ufb01eld. The sample includes 2,705,431 patent\u2013inventor observations assigned to 396,336 unique inventors and 46,880 unique firms, accounting for 473,419 unique inventor\u2013firm pairs. (p. 5)"
>
> "we can use inventor\u2013\ufb01rm \ufb01xed effect models to control for unobserved heterogeneity among inventors and \ufb01rms, which arguably have a strong effect on the novelty and value of creative output (p. 5)"

Scoring will be done using ROUGE and BERTScore similarity to the gold standard quotes. See `task2_eval.py` in `eval/` for more details.
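If you want to sanity-check predicted snippets against gold quotes locally, here is a minimal sketch using the `rouge-score` and `bert-score` packages (installation instructions are under Evaluation and Submission below). The gold/predicted strings and the max-over-gold aggregation are illustrative assumptions only; the official scoring lives in `task2_eval.py`.

```python
from bert_score import score as bert_score
from rouge_score import rouge_scorer

# Illustrative gold quotes (abridged from the example above) and one
# hypothetical predicted snippet.
gold_quotes = [
    "we select the full population of inventors with U.S. patents assigned to firms for 1975-2002 (p. 3)",
    "we can use inventor-firm fixed effect models to control for unobserved heterogeneity among inventors and firms (p. 5)",
]
predicted = ["we selected all inventors with U.S. patents assigned to firms between 1975 and 2002"]

# ROUGE-L F1 against the best-matching gold quote (the official script may
# aggregate differently).
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
best_rouge = max(scorer.score(gold, predicted[0])["rougeL"].fmeasure for gold in gold_quotes)
print(f"ROUGE-L F1 (best gold match): {best_rouge:.3f}")

# BERTScore with multiple references per candidate (refs passed as a list of lists).
P, R, F1 = bert_score(predicted, [gold_quotes], lang="en")
print(f"BERTScore F1: {F1.mean().item():.3f}")
```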
5)" > "we can use inventor\u2013\ufb01rm \ufb01xed effect models to control for unobserved heterogeneity among inventors and \ufb01rms, which arguably have a strong effect on the novelty and value of creative output (p. 5)" Scoring will be done using ROUGE and BERT score similarity to the gold standard quotes. See `eval2.py` in `eval/` for more details. ### Example test data Task 2 is a "test-only" task. In liueu of training data, we are releasing a small (N=42) set of examples, which can be used to get an idea for the task, with the following breakdown across the `akamatsulab` and `megacoglab` datasets: | Dataset | N | | ---------------------------- | --- | | akamatsulab | 28 | | megacoglab | 14 | ### Test data description The test set consists of 109 total scientific claims across two datasets, in the following breakdown | Dataset | N | | ---------------------------- | --- | | akamatsulab | 49 | | megacoglab | 60 | ## Evaluation and Submission You can see how we will evaluate submissions --- both in terms of scoring, and prediction file format and structure --- for Task 1 and 2 by running the appropriate eval script for your predictions. Submissions will be evaluated on the `eval.ai` platform at this challenge URL: https://eval.ai/web/challenges/challenge-page/2306/overview The challenge is currently not yet live (pending some technical issues, should be up in the next few days), but submissions will be accepted in the same format as expected by the eval scripts. ### Task 1 Predictions for this task should be in a `.csv` file with two columns: 1. The claim id (e.g., `megacoglab-W3sdOb60i`) 2. The predicted figure/table ranking, which will be comma-separated string of figure/table names, from highest to lowest ranking Example: ``` claimid,predictions megacoglab-W3sdOb60i,"FIG 1, TAB 1" ``` > [!WARNING] > The script expects a header row, so make sure your csv has a header row, otherwise the first row of your predictions will be skipped. The names in the header row do not matter, because but we don't use the header names to parse the predictions data. To get scores for your predictions, inside the `eval/` subdirectory, run `task1_eval.py` as follows: ```.... python task1_eval.py --pred_file .csv --gold_file ../task1-train-dev.json --parse_folder ../figures-tables ``` You can optionally add `--debug True` if you want to dump scores for individual predicions for debugging/analysis. ### Task 2 Predictions for this task should be in a `.json` file (similar in structure to the training-dev file) where each entry has the following fields: 1. `id` (id of the claim) 2. `context` (list of predicted snippets: order is not important) Before running the eval script for task 2, you will need to first install required dependencies of`bert-score` and `rouge-score`. `bert-score`: https://github.com/Tiiiger/bert_score ``` pip install bert-score ``` `rouge-score`: ``` pip install rouge-score ``` Then run the `task2_eval.py` script in the following format: ``` python task1_eval.py --pred_file .json --gold_file ../task1-train-dev.json --parse_folder ../figures-tables ```