Geoffrey-Wang committed
Commit 4fea62c (verified) · Parent(s): b6a6a9e

Upload folder using huggingface_hub
README.md ADDED
@@ -0,0 +1,67 @@
+ ---
+ license: mit
+ task_categories:
+ - text-classification
+ - fact-checking
+ language:
+ - en
+ tags:
+ - citation-verification
+ - hallucination-detection
+ - scientific-integrity
+ - bayesian-scoring
+ - reference-checking
+ size_categories:
+ - 10K<n<100K
+ ---
+
+ # IntegriRef-Bench
+
+ Benchmark dataset for multi-level reference integrity verification, accompanying the IntegriRef framework.
+
+ ## Dataset Splits
+
+ | Split | Rows | Description |
+ |-------|------|-------------|
+ | `reference_verification` | 1,926 | Golden benchmark + crawled verification cases (hallucinated, real, chimera, retracted) |
+ | `signal_unit_tests` | 403 | Per-signal unit tests for 14 Bayesian signal types |
+ | `l1_intent` | 20 | Citation intent classification test pairs |
+ | `l2_nli` | 20 | Claim-evidence NLI alignment test pairs |
+ | `graph_anomaly` | 3,030 | Citation graph anomaly cases (rings, temporal, orphan clusters) |
+ | `retracted_papers` | 6,391 | Retracted papers from Crossref + PubMed with real controls |
+ | **Total** | **11,790** | |
+
+ ## Usage
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("Geoffrey-Wang/IntegriRef-Bench")
+
+ # Or load a specific split
+ ref_ver = load_dataset("Geoffrey-Wang/IntegriRef-Bench", data_files="reference_verification.jsonl")
+ ```
+
+ ## Sources
+
+ - **Retracted papers**: Crossref `update-to` retraction markers + PubMed retraction notices
+ - **Hallucinated references**: Programmatically generated with verified non-existence
+ - **Chimera references**: Real DOIs paired with swapped metadata
+ - **Graph anomaly cases**: Documented citation cartels (Brazilian 2009-2013, Ji-Huan He, IOP 2024)
+ - **Temporal anomalies**: OpenAlex citation graph analysis
+ - **Statistical cases**: GRIM test + statcheck from PMC Open Access full texts
+
+ ## Citation
+
+ ```bibtex
+ @inproceedings{integriref2026,
+ title={IntegriRef: A Five-Layer Bayesian Framework for Cross-Domain Reference Integrity Verification},
+ author={Anonymous},
+ booktitle={KnowFM Workshop at ACL},
+ year={2026}
+ }
+ ```
+
+ ## License
+
+ MIT
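For quick inspection without the `datasets` library, each split's JSONL file can also be read line by line with the standard library. A minimal sketch; the field names (`category`, `expected_intent`, `expected_misrepresents`) are taken from the `l1_intent` rows shown in this commit, and the two inline records are abbreviated samples, not full rows:

```python
import json
from collections import Counter

# Two abbreviated rows mirroring the l1_intent schema shown in this repo.
sample_jsonl = """\
{"id": "l1_pair_001", "category": "supporting", "expected_intent": "supporting", "expected_misrepresents": false}
{"id": "l1_pair_016", "category": "misrepresents", "expected_intent": "supporting", "expected_misrepresents": true}
"""

rows = [json.loads(line) for line in sample_jsonl.splitlines()]

# Tally categories and collect the misrepresentation cases.
by_category = Counter(r["category"] for r in rows)
misrep_ids = [r["id"] for r in rows if r["expected_misrepresents"]]

print(dict(by_category))  # {'supporting': 1, 'misrepresents': 1}
print(misrep_ids)         # ['l1_pair_016']
```

The same loop applies to the real files: replace `sample_jsonl.splitlines()` with the lines of, e.g., `l1_intent.jsonl`.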
graph_anomaly.jsonl ADDED
The diff for this file is too large to render.
 
l1_intent.jsonl ADDED
@@ -0,0 +1,20 @@
+ {"id": "l1_pair_001", "split": "l1_intent", "category": "supporting", "signal": "none", "citing_sentence": "As demonstrated by Vaswani et al. [1], attention mechanisms can replace recurrence entirely and yield superior performance on sequence-to-sequence tasks when sufficient computational resources are available.", "citation_key": "1", "cited_doi": "10.48550/arXiv.1706.03762", "cited_title": "Attention Is All You Need", "cited_abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the...", "expected_intent": "supporting", "expected_misrepresents": false, "note": "Strong SUPPORTING cue 'as demonstrated by' immediately before citation. Abstract confirms attention-only architecture."}
+ {"id": "l1_pair_002", "split": "l1_intent", "category": "supporting", "signal": "none", "citing_sentence": "Our experimental results are consistent with the findings of Rajpurkar et al. [2], confirming that reading comprehension models trained on large annotated datasets can match human performance on extractive question answering.", "citation_key": "2", "cited_doi": "10.18653/v1/D16-1264", "cited_title": "SQuAD: 100,000+ Questions for Machine Comprehension of Text", "cited_abstract": "We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles. We show that logistic regression and neural network models trained on SQuAD approach human-level performance, with...", "expected_intent": "supporting", "expected_misrepresents": false, "note": "SUPPORTING cue phrase 'consistent with the findings of'. Both 'consistent with' and 'findings of' are in _SUPPORTING_CUES."}
+ {"id": "l1_pair_003", "split": "l1_intent", "category": "supporting", "signal": "none", "citing_sentence": "Following the approach of Devlin et al. [3], we pre-train a masked language model on domain-specific corpora before fine-tuning on downstream clinical NLP tasks.", "citation_key": "3", "cited_doi": "10.18653/v1/N19-1423", "cited_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "cited_abstract": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both...", "expected_intent": "supporting", "expected_misrepresents": false, "note": "SUPPORTING/USING via 'following the approach of'. Intent classifier merges USING → SUPPORTING in 3-class output."}
+ {"id": "l1_pair_004", "split": "l1_intent", "category": "supporting", "signal": "none", "citing_sentence": "This finding is in line with prior evidence from Chen et al. [4], who showed that dropout regularization reduces overfitting in deep neural networks even without explicit weight decay.", "citation_key": "4", "cited_doi": "10.5555/2627435.2670313", "cited_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "cited_abstract": "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural...", "expected_intent": "supporting", "expected_misrepresents": false, "note": "SUPPORTING cue 'in line with prior evidence from'. Confirms abstract's finding about overfitting."}
+ {"id": "l1_pair_005", "split": "l1_intent", "category": "supporting", "signal": "none", "citing_sentence": "The model achieves an accuracy of 94.2% on the CIFAR-10 benchmark, consistent with results reported by He et al. [5] for deep residual networks of comparable depth.", "citation_key": "5", "cited_doi": "10.1109/CVPR.2016.90", "cited_title": "Deep Residual Learning for Image Recognition", "cited_abstract": "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of...", "expected_intent": "supporting", "expected_misrepresents": false, "note": "SUPPORTING via 'consistent with results reported by'. Performance comparison context."}
+ {"id": "l1_pair_006", "split": "l1_intent", "category": "supporting", "signal": "none", "citing_sentence": "As shown by Hochreiter and Schmidhuber [6], gating mechanisms in long short-term memory networks effectively address the vanishing gradient problem, which our ablation study further confirms under longer sequence lengths.", "citation_key": "6", "cited_doi": "10.1162/neco.1997.9.8.1735", "cited_title": "Long Short-Term Memory", "cited_abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's 1991 analysis of this problem, then address it by introducing a novel, efficient, gradient-based...", "expected_intent": "supporting", "expected_misrepresents": false, "note": "SUPPORTING cue 'as shown by'. The claiming sentence correctly reflects the cited paper's contribution on vanishing gradients."}
+ {"id": "l1_pair_007", "split": "l1_intent", "category": "contrasting", "signal": "none", "citing_sentence": "In contrast to the claims of Bengio et al. [7], our experiments show that curriculum learning does not consistently improve convergence when training transformers on text generation tasks.", "citation_key": "7", "cited_doi": "10.1145/1553374.1553380", "cited_title": "Curriculum learning", "cited_abstract": "Humans and animals learn much better when the examples are not randomly presented but organized in a meaningful order which illustrates gradually more concepts, and gradually more complex ones. Here, we formalize such training strategies in the context of machine learning, and call them curriculum...", "expected_intent": "contrasting", "expected_misrepresents": false, "note": "Strong CONTRASTING cue 'in contrast to'. The citing paper disputes the scope of the cited work's benefit."}
+ {"id": "l1_pair_008", "split": "l1_intent", "category": "contrasting", "signal": "none", "citing_sentence": "Unlike the model of Peters et al. [8], which requires task-specific fine-tuning to be competitive, our approach achieves strong zero-shot transfer across all evaluated domains.", "citation_key": "8", "cited_doi": "10.18653/v1/N18-1202", "cited_title": "Deep contextualized word representations", "cited_abstract": "We introduce a new type of deep contextualized word representation that models both complex characteristics of word use (e.g., syntax and semantics), and how these uses vary across linguistic contexts. Our word vectors are learned functions of the internal states of a deep bidirectional language...", "expected_intent": "contrasting", "expected_misrepresents": false, "note": "CONTRASTING via 'unlike'. Contrasts zero-shot capability vs. task-specific fine-tuning requirement."}
+ {"id": "l1_pair_009", "split": "l1_intent", "category": "contrasting", "signal": "none", "citing_sentence": "The approach proposed by Goodfellow et al. [9] fails to account for training instability arising from mode collapse, a well-documented limitation of vanilla GAN formulations.", "citation_key": "9", "cited_doi": "10.48550/arXiv.1406.2661", "cited_title": "Generative Adversarial Nets", "cited_abstract": "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather...", "expected_intent": "contrasting", "expected_misrepresents": false, "note": "CONTRASTING via 'fails to account for' + 'limitation of'. Both are strong _CONTRASTING_CUES patterns."}
+ {"id": "l1_pair_010", "split": "l1_intent", "category": "contrasting", "signal": "none", "citing_sentence": "While batch normalization as described in Ioffe and Szegedy [10] improves training stability, it suffers from significant performance degradation under small batch sizes, contrary to the authors' original claims.", "citation_key": "10", "cited_doi": "10.5555/3045118.3045167", "cited_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", "cited_abstract": "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs.", "expected_intent": "contrasting", "expected_misrepresents": false, "note": "CONTRASTING via 'suffers from'. 'While' alone is a weak marker but is combined with 'suffers from', a strong contrasting cue."}
+ {"id": "l1_pair_011", "split": "l1_intent", "category": "contrasting", "signal": "none", "citing_sentence": "The findings of Obermeyer et al. [11] challenge the assumption that algorithmic risk scores are race-neutral; their analysis reveals that a widely deployed commercial algorithm exhibited significant racial bias in healthcare resource allocation.", "citation_key": "11", "cited_doi": "10.1126/science.aax2342", "cited_title": "Dissecting racial bias in an algorithm used to manage the health of populations", "cited_abstract": "Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this class of algorithms, exhibits significant racial bias: at a given risk score, Black patients are considerably sicker than White...", "expected_intent": "contrasting", "expected_misrepresents": false, "note": "CONTRASTING via 'challenge the assumption'. The cited paper itself presents contrasting evidence against the status quo."}
+ {"id": "l1_pair_012", "split": "l1_intent", "category": "mentioning", "signal": "none", "citing_sentence": "Several approaches to neural machine translation have been explored in recent years [12], including encoder-decoder architectures with attention and purely convolutional models.", "citation_key": "12", "cited_doi": "10.18653/v1/D14-1179", "cited_title": "Neural Machine Translation by Jointly Learning to Align and Translate", "cited_abstract": "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance.", "expected_intent": "mentioning", "expected_misrepresents": false, "note": "Pure background MENTIONING. No strong cue phrases. Enumerates prior work neutrally."}
+ {"id": "l1_pair_013", "split": "l1_intent", "category": "mentioning", "signal": "none", "citing_sentence": "Reinforcement learning from human feedback has been studied extensively [13], and remains an active research area with applications in dialogue systems and code generation.", "citation_key": "13", "cited_doi": "10.48550/arXiv.2203.02155", "cited_title": "Training language models to follow instructions with human feedback", "cited_abstract": "Making language models bigger does not inherently make them better at following a user's intent. Large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user.", "expected_intent": "mentioning", "expected_misrepresents": false, "note": "Background mention. 'Has been studied extensively' is a stock neutral academic phrase with no strong cue pattern."}
+ {"id": "l1_pair_014", "split": "l1_intent", "category": "mentioning", "signal": "none", "citing_sentence": "Bayesian optimization methods [14] represent one family of approaches to hyperparameter search, though they are not the focus of the current paper.", "citation_key": "14", "cited_doi": "10.5555/2999134.2999257", "cited_title": "Practical Bayesian Optimization of Machine Learning Algorithms", "cited_abstract": "Machine learning algorithms frequently require careful tuning of model hyperparameters, regularization terms, and optimization parameters. Unfortunately, this tuning is often a dark art requiring expert experience, rules of thumb, or sometimes brute-force search.", "expected_intent": "mentioning", "expected_misrepresents": false, "note": "Peripheral/scope-delimiting MENTIONING. The phrase 'not the focus of the current paper' is explicitly neutral."}
+ {"id": "l1_pair_015", "split": "l1_intent", "category": "mentioning", "signal": "none", "citing_sentence": "Graph neural networks [15] have been applied to a range of problems including drug discovery, social network analysis, and combinatorial optimization.", "citation_key": "15", "cited_doi": "10.1109/TNN.2008.2005605", "cited_title": "The Graph Neural Network Model", "cited_abstract": "Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs.", "expected_intent": "mentioning", "expected_misrepresents": false, "note": "Definitional/scope-setting MENTIONING. Lists applications without endorsing or disputing the cited method."}
+ {"id": "l1_pair_016", "split": "l1_intent", "category": "misrepresents", "signal": "citation_misrepresents_source", "citing_sentence": "As demonstrated by Knijnenburg et al. [16], users reliably prefer personalized recommendations over non-personalized ones across all demographic groups and interface types tested.", "citation_key": "16", "cited_doi": "10.1145/2043932.2043956", "cited_title": "Explaining the User Experience of Recommender Systems", "cited_abstract": "We study the effect of system accuracy on user experience in recommender systems. We find that accuracy is an important but not the only determinant of user satisfaction. Perceived privacy risk, algorithm transparency, and individual differences significantly moderate the relationship between...", "expected_intent": "supporting", "expected_misrepresents": true, "note": "MISREPRESENTS: The citing sentence says users 'reliably prefer' personalized recs 'across all demographic groups'. The actual paper found that individual differences and privacy concerns moderate this relationship — it is not universal. The cue phrase 'as demonstrated by' triggers SUPPORTING but the claim overgeneralizes a conditional finding."}
+ {"id": "l1_pair_017", "split": "l1_intent", "category": "misrepresents", "signal": "citation_misrepresents_source", "citing_sentence": "Power et al. [17] showed that neural networks can generalize compositionally in a manner comparable to humans, validating the use of deep learning as a model of systematic human cognition.", "citation_key": "17", "cited_doi": "10.1162/tacl_a_00334", "cited_title": "SCAN: Learning to Compose Commands", "cited_abstract": "We introduce SCAN, a set of simple language navigation tasks that test compositional learning. We show that standard sequence-to-sequence and convolutional models fail to achieve human-like systematic generalization on SCAN, with performance dropping dramatically on compositionally novel test...", "expected_intent": "supporting", "expected_misrepresents": true, "note": "MISREPRESENTS: The citing sentence inverts the conclusion. The paper actually found that standard NNs fail to generalize compositionally like humans. The citing sentence cites it as validation of deep learning's human-like compositionality — the exact opposite of the paper's finding."}
+ {"id": "l1_pair_018", "split": "l1_intent", "category": "misrepresents", "signal": "citation_misrepresents_source", "citing_sentence": "The large-scale study of Ioannidis [18] confirmed that most published findings in biomedicine are reliable and reproducible when adequate sample sizes are used.", "citation_key": "18", "cited_doi": "10.1371/journal.pmed.0020124", "cited_title": "Why Most Published Research Findings Are False", "cited_abstract": "There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power, bias, the number of other studies on the same question, and, importantly, the ratio of true to not-true relationships among those probed in...", "expected_intent": "supporting", "expected_misrepresents": true, "note": "MISREPRESENTS: A classic direct inversion. Ioannidis's paper argues most findings are FALSE; the citing sentence claims the paper confirmed findings are reliable. The cue 'confirmed' triggers SUPPORTING intent but the claim directly contradicts the paper's thesis."}
+ {"id": "l1_pair_019", "split": "l1_intent", "category": "all_mentioning", "signal": "none", "citing_sentence": "The field of information retrieval has a rich history of work on relevance ranking [19], document clustering [20], and query expansion [21].", "citation_key": "19", "cited_doi": "10.1145/312624.312679", "cited_title": "Okapi BM25: A Non-Binary Model of Document Indexing", "cited_abstract": "BM25 is a ranking function used by search engines to rank matching documents according to their relevance to a given search query. It is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document.", "expected_intent": "mentioning", "expected_misrepresents": false, "note": "ALL_MENTIONING document case (first of two). All three citations [19], [20], [21] in this sentence are pure background enumerations. Tests the all_citations_mentioning signal which fires when every citation in a document is mentioning-only. In a document where [19], [20], [21] are the only citations and all are background, the signal should fire."}
+ {"id": "l1_pair_020", "split": "l1_intent", "category": "all_mentioning", "signal": "none", "citing_sentence": "Early neural approaches to text classification included convolutional networks [22] and recurrent architectures [23], each building on earlier statistical methods.", "citation_key": "22", "cited_doi": "10.18653/v1/D14-1181", "cited_title": "Convolutional Neural Networks for Sentence Classification", "cited_abstract": "We report on a series of experiments with convolutional neural networks trained on top of pre-trained word vectors for sentence-level classification tasks. With little hyperparameter tuning, the system achieves good results on multiple benchmarks, suggesting that the pre-trained vectors are good,...", "expected_intent": "mentioning", "expected_misrepresents": false, "note": "ALL_MENTIONING document case (second of two). Combined with l1_pair_019, if these are the only citations in a document, all_citations_mentioning fires. Tests that the signal correctly identifies a paper that never cites evidence, only lists prior work."}
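The notes in these rows reference internal `_SUPPORTING_CUES` and `_CONTRASTING_CUES` tables whose full contents are not published here. A minimal sketch of the 3-class cue baseline the notes describe, using only cue phrases quoted in the rows above (the lists are illustrative stand-ins, not the framework's actual tables):

```python
# Illustrative cue lists assembled from phrases quoted in the l1_intent notes;
# the framework's real _SUPPORTING_CUES / _CONTRASTING_CUES tables are larger.
SUPPORTING_CUES = ["as demonstrated by", "consistent with", "as shown by",
                   "in line with", "following the approach of", "confirmed"]
CONTRASTING_CUES = ["in contrast to", "unlike", "fails to account for",
                    "suffers from", "challenge the assumption", "contrary to"]

def classify_intent(sentence: str) -> str:
    """3-class cue baseline: supporting / contrasting / mentioning."""
    s = sentence.lower()
    # Contrasting cues take precedence: per l1_pair_010, a weak marker like
    # "while" plus "suffers from" should still land on contrasting.
    if any(cue in s for cue in CONTRASTING_CUES):
        return "contrasting"
    if any(cue in s for cue in SUPPORTING_CUES):
        return "supporting"
    return "mentioning"  # no strong cue -> background mention

print(classify_intent("As demonstrated by Vaswani et al. [1], attention mechanisms can replace recurrence entirely."))  # supporting
print(classify_intent("In contrast to the claims of Bengio et al. [7], our experiments show otherwise."))               # contrasting
print(classify_intent("Several approaches to neural machine translation have been explored in recent years [12]."))     # mentioning
```

Note this baseline only recovers `expected_intent`; the `misrepresents` cases (l1_pair_016 to 018) are designed so the cue signal fires *supporting* while the claim still distorts the source, which is exactly what the NLI layer is meant to catch.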
l2_nli.jsonl ADDED
@@ -0,0 +1,20 @@
+ {"id": "l2_pair_001", "split": "l2_nli", "category": "entailment", "signal": "none", "claim": "BERT achieves state-of-the-art results on eleven NLP benchmarks including GLUE and SQuAD 2.0 by fine-tuning a single pre-trained model.", "source_text": "We introduce a new language representation model called BERT. BERT obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5%, MultiNLI accuracy to 86.7%, SQuAD v1.1 F1 to 93.2, and SQuAD v2.0 F1 to 83.1.", "expected_relation": "entailment", "domain": "cs", "note": "Clear entailment. Claim accurately paraphrases the abstract's core finding. Fine-tuning a single model to achieve SOTA on multiple benchmarks is the BERT paper's central contribution."}
+ {"id": "l2_pair_002", "split": "l2_nli", "category": "entailment", "signal": "none", "claim": "Dropout applied during training improves generalization of deep neural networks by preventing co-adaptation of feature detectors.", "source_text": "We describe a technique called Dropout that addresses the problem of overfitting in neural networks. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different thinned networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights.", "expected_relation": "entailment", "domain": "cs", "note": "Entailment. The claim restates the mechanism (preventing co-adaptation) and effect (improved generalization) accurately."}
+ {"id": "l2_pair_003", "split": "l2_nli", "category": "entailment", "signal": "none", "claim": "Among patients with resectable non-small-cell lung cancer, adjuvant chemotherapy with cisplatin-based regimens significantly improves five-year overall survival compared to surgery alone.", "source_text": "We performed a meta-analysis of randomized trials comparing cisplatin-based adjuvant chemotherapy to observation after complete surgical resection in patients with non-small-cell lung cancer. Adjuvant chemotherapy was associated with a significant improvement in overall survival (HR 0.89; 95% CI 0.82–0.96; p=0.005), corresponding to an absolute benefit of 5.4% at 5 years (from 60.4% to 65.8%).", "expected_relation": "entailment", "domain": "biomedical", "note": "Entailment. Claim correctly summarizes the meta-analytic finding of significant OS benefit. The 5-year frame and 'surgery alone' comparator are accurate."}
+ {"id": "l2_pair_004", "split": "l2_nli", "category": "entailment", "signal": "none", "claim": "Stereotype threat impairs the performance of negatively stereotyped groups on standardized tests.", "source_text": "When a negative intellectual stereotype about one's group could be applied to oneself in a testing situation, we hypothesized that this would interfere with intellectual functioning. In three experiments, Black students who were told that a test was diagnostic of intellectual ability performed significantly worse than Black students who were told the test was not diagnostic. White students showed no such pattern. These results suggest that stereotype threat — the risk of confirming a negative stereotype about one's group — can depress the intellectual performance of group members.", "expected_relation": "entailment", "domain": "psychology", "note": "Entailment. The claim accurately captures Steele & Aronson's core finding. 'Negatively stereotyped groups' and 'standardized tests' match the experimental setup."}
+ {"id": "l2_pair_005", "split": "l2_nli", "category": "entailment", "signal": "none", "claim": "Socioeconomic status in childhood is a significant predictor of adult health outcomes, independent of adult socioeconomic position.", "source_text": "This study examined the relationship between childhood socioeconomic circumstances and adult health using data from a prospective cohort (n=11,441). After adjusting for adult socioeconomic position, education, and health behaviors, childhood poverty was independently associated with higher rates of cardiovascular disease (OR 1.38, 95% CI 1.19–1.60), diabetes (OR 1.24, 95% CI 1.08–1.43), and all-cause mortality (HR 1.31, 95% CI 1.15–1.49) in middle adulthood.", "expected_relation": "entailment", "domain": "social_science", "note": "Entailment. Claim correctly states independent effect of childhood SES. Source explicitly adjusts for adult SES, confirming the independent pathway."}
+ {"id": "l2_pair_006", "split": "l2_nli", "category": "entailment", "signal": "none", "claim": "Transformer models pre-trained on large corpora can be fine-tuned with very few labeled examples to achieve competitive performance on downstream NLP tasks.", "source_text": "We demonstrate that pre-trained language model representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. In low-resource settings with as few as 100 labeled examples, fine-tuned BERT models outperform task-specific models trained on the full dataset.", "expected_relation": "entailment", "domain": "cs", "note": "Entailment. 'Very few labeled examples' aligns with the source's 'as few as 100 labeled examples' and 'low-resource settings'. The performance advantage is confirmed."}
+ {"id": "l2_pair_007", "split": "l2_nli", "category": "contradiction", "signal": "claim_contradicted", "claim": "Adding aspirin to standard anticoagulation therapy significantly reduces the risk of recurrent venous thromboembolism by approximately 30% in patients who have completed initial treatment.", "source_text": "In a randomized, double-blind trial of 822 patients with unprovoked venous thromboembolism who had completed initial anticoagulation, we compared aspirin 100 mg daily with placebo. The rate of recurrent VTE was 6.6% per year in the aspirin group and 11.2% per year in the placebo group (HR 0.58, 95% CI 0.36–0.93). However, when we adjusted for aspirin's antiplatelet effect on arterial events, the absolute benefit on VTE recurrence alone was 1.3% per year and did not reach statistical significance (p=0.18) after sensitivity analysis excluding arterial events.", "expected_relation": "contradiction", "domain": "biomedical", "note": "CONTRADICTION. The claim states a '30% reduction' that is 'significant'. The source shows 42% relative reduction in crude rate (HR 0.58) but the VTE-specific effect did not reach significance after sensitivity analysis. The claim overstates the magnitude and confidence. Subtly plausible because the crude HR is indeed ~0.58."}
+ {"id": "l2_pair_008", "split": "l2_nli", "category": "contradiction", "signal": "claim_contradicted", "claim": "Multitasking improves productivity by allowing workers to complete multiple tasks in parallel, as workers who multitask report higher job satisfaction and output.", "source_text": "We conducted a controlled experiment in which 128 participants completed cognitive tasks under single-task and multitask conditions. Multitasking increased the time required to complete both tasks by an average of 40% compared to sequential processing. Self-reported satisfaction was higher in the multitasking condition despite objectively lower performance. Workers who believed they were multitasking effectively showed systematic overconfidence in their output quality.", "expected_relation": "contradiction", "domain": "psychology", "note": "CONTRADICTION. The claim asserts multitasking improves productivity. The source finds 40% longer completion times (productivity decrease). The source does note higher self-reported satisfaction, which a selective reader might use to construct the misleading claim — a cherry-pick combined with inversion."}
+ {"id": "l2_pair_009", "split": "l2_nli", "category": "contradiction", "signal": "claim_contradicted", "claim": "Social media use in adolescents is causally linked to depression, with each additional hour of use per day increasing depressive symptom scores by a clinically meaningful margin.", "source_text": "Using data from the UK Millennium Cohort Study (n=10,894 adolescents aged 14), we examined the longitudinal relationship between social media use and depressive symptoms. After controlling for baseline depression, family SES, and peer relationships, social media use at age 14 was associated with depressive symptoms at age 16 (β=0.06, 95% CI 0.02–0.10). Effect sizes were small and did not meet conventional thresholds for clinical significance (d<0.1). No dose-response relationship was observed for boys. We conclude that social media's role in adolescent depression is modest and likely not causal.", "expected_relation": "contradiction", "domain": "psychology", "note": "CONTRADICTION. The claim asserts a causal link and 'clinically meaningful' increase per hour. The source explicitly states effect sizes are below clinical significance thresholds, no dose-response for boys, and 'likely not causal'. A common misrepresentation in media coverage of correlational cohort studies."}
+ {"id": "l2_pair_010", "split": "l2_nli", "category": "contradiction", "signal": "claim_contradicted", "claim": "Retrieval-augmented generation eliminates hallucination in large language models by grounding responses in retrieved document passages.", "source_text": "We introduce Retrieval-Augmented Generation (RAG), which combines pre-trained parametric and non-parametric memory for language generation. RAG reduces hallucination compared to parametric-only models: on TriviaQA, RAG achieves 68.0% EM vs 52.1% for GPT-2. However, RAG does not eliminate hallucination entirely. When retrieved documents are irrelevant or out-of-date, the model can still generate factually incorrect statements. Hallucination rates remain non-trivial even with retrieval augmentation.", "expected_relation": "contradiction", "domain": "cs", "note": "CONTRADICTION. The claim says RAG 'eliminates' hallucination. Source explicitly states it does not eliminate it and that non-trivial hallucination remains. The word 'eliminates' vs 'reduces' is the crux — a subtle but consequential distortion common in deployment-focused papers citing RAG."}
+ {"id": "l2_pair_011", "split": "l2_nli", "category": "contradiction", "signal": "claim_contradicted", "claim": "The minimum wage increases studied by Dube et al. reduced employment in the restaurant sector by approximately 2–3% per 10% wage increase.", "source_text": "We use county-pair differences to control for regional heterogeneity in examining the effects of minimum wage increases on restaurant employment. Comparing counties on either side of state borders, we find that minimum wage increases have no discernible employment effect in the restaurant sector. The estimated employment elasticity is -0.01 (SE 0.07), not statistically distinguishable from zero. Our results contradict the consensus view of approximately 1–2% employment loss per 10% wage increase.", "expected_relation": "contradiction", "domain": "social_science", "note": "CONTRADICTION. The claim attributes a '2–3% loss' figure to Dube et al. The actual Dube, Lester & Reich (2010) paper finds near-zero employment effects (elasticity ≈ -0.01) and explicitly contradicts the elasticity the claim attributes to them."}
+ {"id": "l2_pair_012", "split": "l2_nli", "category": "neutral", "signal": "claim_unsupported", "claim": "The Adam optimizer outperforms SGD with momentum on all image classification tasks when training from scratch on ImageNet.", "source_text": "We propose Adam, a stochastic optimization algorithm that computes adaptive learning rates for each parameter using estimates of first and second moments of the gradients. Empirically, we demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods on logistic regression, multilayer fully connected neural networks, and recurrent neural networks, including language modeling and speech recognition.", "expected_relation": "neutral", "domain": "cs", "note": "NEUTRAL/UNSUPPORTED. The claim makes a universal assertion about image classification and ImageNet specifically. The Adam paper's benchmarks cover logistic regression, MLPs, and RNNs but do NOT include ImageNet-scale CNN training — the domain where SGD often outperforms Adam in practice."}
+ {"id": "l2_pair_013", "split": "l2_nli", "category": "neutral", "signal": "claim_unsupported", "claim": "Meditation interventions reduce cortisol levels in healthy adults by at least 20% after eight weeks of practice.", "source_text": "We conducted a randomized controlled trial of mindfulness-based stress reduction (MBSR) in 89 healthy adults. After 8 weeks, participants in the MBSR group showed significant reductions in self-reported stress and anxiety (Cohen's d=0.58) compared to waitlist controls. No blood samples were collected; cortisol was not measured in this study.", "expected_relation": "neutral", "domain": "biomedical", "note": "NEUTRAL/UNSUPPORTED. The claim specifically concerns cortisol levels, a biomarker not measured in the source study. The source measures subjective stress and anxiety only. The claim is neither supported nor contradicted — it addresses a variable the study did not examine."}
+ {"id": "l2_pair_014", "split": "l2_nli", "category": "neutral", "signal": "claim_unsupported", "claim": "Large language models exhibit in-context learning primarily because they perform implicit Bayesian inference over possible programs.", "source_text": "We study in-context learning (ICL) in large language models, where a model performs a task given a few demonstrations without parameter updates. We show that ICL performance scales predictably with model size and number of demonstrations. Models with more than 100B parameters show emergent ICL capabilities on tasks where smaller models fail entirely. The mechanism underlying ICL remains an open question.", "expected_relation": "neutral", "domain": "cs", "note": "NEUTRAL/UNSUPPORTED. The claim asserts a specific mechanistic explanation (Bayesian inference over programs). The source describes the phenomenon and its scaling properties but explicitly states the mechanism is an open question. The causal claim goes beyond what the source establishes."}
+ {"id": "l2_pair_015", "split": "l2_nli", "category": "neutral", "signal": "claim_unsupported", "claim": "Exposure to green spaces in urban environments reduces violent crime rates by displacing potential offenders to public parks.", "source_text": "A longitudinal study of 21 U.S. cities found that increases in urban green space were associated with reductions in violent crime (β=-0.14, p<0.05) over a 10-year period. The association persisted after controlling for population density, income, and police presence. The authors note that the causal mechanism is unknown, and multiple pathways including social cohesion, temperature reduction, and territorial reclamation are plausible.", "expected_relation": "neutral", "domain": "social_science", "note": "NEUTRAL/UNSUPPORTED. The claim attributes a specific mechanism ('displacing potential offenders to parks') that the source paper does not identify. The source reports the association and explicitly states the mechanism is unknown. Mechanism imputation is a common form of claim inflation."}
+ {"id": "l2_pair_016", "split": "l2_nli", "category": "neutral", "signal": "claim_unsupported", "claim": "The gut microbiome composition in patients with major depressive disorder is indistinguishable from that of healthy controls when matched for diet and BMI.", "source_text": "We performed 16S rRNA sequencing on stool samples from 127 patients with MDD and 123 healthy controls. Patients with MDD showed significant reductions in Faecalibacterium prausnitzii (p=0.002) and Bifidobacterium longum (p=0.004). Importantly, our cohort was not matched for dietary pattern or BMI, which are potential confounders. Whether the observed differences would persist after such matching remains to be determined.", "expected_relation": "neutral", "domain": "biomedical", "note": "NEUTRAL/UNSUPPORTED. The claim asserts equivalence when matched for diet and BMI. The source study explicitly did NOT match for these confounders and flags this as a limitation. The source actually finds differences, but whether those differences survive matching is unresolved."}
+ {"id": "l2_pair_017", "split": "l2_nli", "category": "neutral", "signal": "claim_unsupported", "claim": "Transfer learning from ImageNet-pretrained models consistently improves performance across all computer vision tasks, including medical image analysis.", "source_text": "We show that features learned from ImageNet transfer effectively to natural image tasks such as object detection and scene recognition, reducing the need for large task-specific datasets. Transfer learning improved performance by 4–8% on Pascal VOC and COCO benchmarks. Note that these results apply to natural images; transfer to specialized domains such as histopathology or radiology may require domain adaptation due to the substantial visual distribution shift.", "expected_relation": "neutral", "domain": "cs", "note": "EDGE CASE — overgeneralization. The source shows strong transfer benefits for natural image tasks but explicitly caveats that specialized medical domains (histopathology, radiology) require domain adaptation. The claim removes this scope condition and generalizes to 'all computer vision tasks including medical'."}
+ {"id": "l2_pair_018", "split": "l2_nli", "category": "neutral", "signal": "claim_unsupported", "claim": "The drug canagliflozin significantly reduces the risk of major adverse cardiovascular events in patients with type 2 diabetes.", "source_text": "The CANVAS trial randomized 10,142 patients with type 2 diabetes and high cardiovascular risk to canagliflozin or placebo. Canagliflozin reduced the composite of major adverse cardiovascular events (MACE: nonfatal MI, nonfatal stroke, or CV death) by 14% (HR 0.86, 95% CI 0.75–0.97; p=0.02). However, canagliflozin was associated with a nearly doubled risk of lower-limb amputation (HR 1.97, 95% CI 1.41–2.75; p<0.001). The benefit-risk profile requires careful consideration in patients at high amputation risk.", "expected_relation": "neutral", "domain": "biomedical", "note": "EDGE CASE — cherry-picking. The claim is technically supported for MACE (HR 0.86, p=0.02). However, by omitting the doubled amputation risk, the claim presents a selective and misleading picture of the drug's profile. This tests whether the signal fires for incomplete, positively-biased representation of mixed results."}
+ {"id": "l2_pair_019", "split": "l2_nli", "category": "neutral", "signal": "claim_unsupported", "claim": "The best-performing model on the SuperGLUE benchmark is T5-11B, which achieved a score of 89.3, surpassing the human baseline.", "source_text": "We introduce the T5 (Text-to-Text Transfer Transformer) framework, which unifies NLP tasks under a text-to-text format. Our largest model, T5-11B, achieves 89.3 on SuperGLUE, surpassing the human baseline of 89.8 on several subtasks while matching it overall. At the time of submission (October 2019), T5-11B holds the top position on the SuperGLUE leaderboard.", "expected_relation": "neutral", "domain": "cs", "note": "EDGE CASE — outdated claim. At the time of the T5 paper (2019), the claim was accurate. By 2021+ many models surpassed T5-11B on SuperGLUE. A paper citing this in 2024 as the 'best-performing model' would be using an outdated claim. The source itself qualifies: 'at the time of submission'. Tests whether temporal context flags the currency problem."}
+ {"id": "l2_pair_020", "split": "l2_nli", "category": "entailment", "signal": "none", "claim": "The Transformer architecture, which relies exclusively on self-attention and dispensing with recurrence, achieves superior results on machine translation compared to recurrent sequence models.", "source_text": "We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show the Transformer generalizes well to other tasks by applying it successfully, achieving 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU.", "expected_relation": "entailment", "domain": "cs", "note": "EDGE CASE — paraphrase entailment. The claim rephrases the abstract in different words: 'relies exclusively on self-attention' paraphrases 'based solely on attention mechanisms', 'dispensing with recurrence' is near-verbatim, and 'superior results on machine translation' paraphrases the BLEU improvement. Tests that the NLI model handles paraphrase as entailment rather than flagging it as suspicious."}
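The `l2_nli` records above share a uniform JSONL schema: `id`, `split`, `category`, `signal`, `claim`, `source_text`, `expected_relation`, `domain`, and `note`. A minimal parsing sketch, using two abbreviated sample lines in place of the full records (field values shortened for brevity; the keys match the records above):

```python
import json

# Two abbreviated sample records in the l2_nli schema shown above.
sample_lines = [
    '{"id": "l2_pair_010", "split": "l2_nli", "category": "contradiction", '
    '"expected_relation": "contradiction"}',
    '{"id": "l2_pair_020", "split": "l2_nli", "category": "entailment", '
    '"expected_relation": "entailment"}',
]

# Parse each JSONL line into a dict.
records = [json.loads(line) for line in sample_lines if line.strip()]

# Group pair IDs by their expected NLI relation, e.g. to check label balance.
by_relation = {}
for rec in records:
    by_relation.setdefault(rec["expected_relation"], []).append(rec["id"])

for relation, ids in sorted(by_relation.items()):
    print(relation, len(ids))
```

The same loop works unchanged on the full split once it is downloaded (e.g. via `load_dataset` with `data_files="l2_nli.jsonl"` as in the usage section).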
reference_verification.jsonl ADDED
 
retracted_papers.jsonl ADDED
 
signal_unit_tests.jsonl ADDED