---
license: cc0-1.0
configs:
  - config_name: default
    data_files:
      - split: wikidata
        path: ReFactX_wikidata_facts.bz2
task_categories:
  - question-answering
---

# ReFactX: Scalable Reasoning with Reliable Facts via Constrained Generation — Dataset

This dataset contains verbalized facts from Wikidata to make reproducing ReFactX straightforward. ReFactX is a scalable method that enables Large Language Models (LLMs) to access external knowledge without depending on retrievers or auxiliary models. It uses constrained generation with a pre-built prefix-tree index: triples from a Knowledge Graph are verbalized into textual facts, tokenized, and indexed for efficient access. The dataset consists of `ReFactX_wikidata_facts.bz2`, a collection of 800 million textual facts from Wikidata designed for use in the ReFactX system.
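The prefix-tree idea behind constrained generation can be sketched in a few lines. This is a minimal illustration, not the actual ReFactX code: a nested-dict trie stands in for the pre-built index, and a toy whitespace tokenizer stands in for the model's tokenizer.

```python
# Toy prefix tree over tokenized facts: during constrained generation,
# only tokens that extend some indexed fact are allowed next.
END = object()  # marks that a complete fact ends at this node

def build_prefix_tree(facts):
    """Index tokenized facts in a nested-dict trie."""
    root = {}
    for fact in facts:
        node = root
        for token in fact.split():  # toy whitespace tokenizer
            node = node.setdefault(token, {})
        node[END] = True
    return root

def allowed_next_tokens(tree, prefix_tokens):
    """Tokens that keep the sequence a prefix of an existing fact."""
    node = tree
    for token in prefix_tokens:
        node = node.get(token)
        if node is None:
            return set()  # prefix matches no indexed fact
    return {t for t in node if t is not END}

facts = [
    "Rome capital of Italy",
    "Rome instance of city",
]
tree = build_prefix_tree(facts)
# After generating "Rome", only "capital" and "instance" are allowed;
# any other token would leave the set of indexed facts.
```

In the real system the trie is built over the LLM's own token IDs rather than words, so the allowed-token set can be applied directly as a mask over the model's vocabulary at each decoding step.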

## Paper

- The Semantic Web - ISWC 2025
- Hugging Face Paper Page
- Arxiv Preprint

## Code

GitHub Repository

## Abstract

Knowledge gaps and hallucinations are persistent challenges for Large Language Models (LLMs), which generate unreliable responses when lacking the necessary information to fulfill user instructions. Existing approaches, such as Retrieval-Augmented Generation (RAG) and tool use, aim to address these issues by incorporating external knowledge. Yet, they rely on additional models or services, resulting in complex pipelines, potential error propagation, and often requiring the model to process a large number of tokens. In this paper, we present a scalable method that enables LLMs to access external knowledge without depending on retrievers or auxiliary models. Our approach uses constrained generation with a pre-built prefix-tree index. Triples from a Knowledge Graph are verbalized in textual facts, tokenized, and indexed in a prefix tree for efficient access. During inference, to acquire external knowledge, the LLM generates facts with constrained generation which allows only sequences of tokens that form an existing fact. We evaluate our proposal on Question Answering and show that it scales to large knowledge bases (800 million facts), adapts to domain-specific data, and achieves effective results. These gains come with minimal generation-time overhead.

## Sample Usage

To get started with ReFactX, first install the required dependencies:

```bash
pip install -r requirements.txt
```

To quickly try ReFactX with an in-memory prefix tree (built from a 31k-fact knowledge base), use the notebook `try_refactx.ipynb` from the GitHub repository.

To reproduce the experiments described in the paper, refer to the `eval.py` script in the GitHub repository, replacing `INDEX`, `MODEL`, and `DATASET` according to your specific needs.
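To inspect the raw dataset file itself, the facts can be streamed without decompressing the whole archive. This is a hypothetical sketch, assuming the bz2 archive contains one textual fact per line (the helper name `iter_facts` is ours, not part of the ReFactX codebase):

```python
import bz2
from itertools import islice

def iter_facts(path="ReFactX_wikidata_facts.bz2"):
    """Lazily yield one verbalized fact per non-empty line."""
    with bz2.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield line

# Peek at the first few facts without loading 800M lines into memory:
# for fact in islice(iter_facts(), 5):
#     print(fact)
```

Streaming matters here: at 800 million facts, materializing the file in memory is not an option, so downstream indexing should consume the generator incrementally.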

## Citation

```bibtex
@InProceedings{10.1007/978-3-032-09527-5_16,
  author    = {Pozzi, Riccardo and Palmonari, Matteo and Coletta, Andrea
               and Bellomarini, Luigi and Lehmann, Jens and Vahdati, Sahar},
  editor    = {Garijo, Daniel and Kirrane, Sabrina and Salatino, Angelo
               and Shimizu, Cogan and Acosta, Maribel and Nuzzolese, Andrea Giovanni
               and Ferrada, Sebasti{\'a}n and Soulard, Thibaut and Kozaki, Kouji
               and Takeda, Hideaki and Gentile, Anna Lisa},
  title     = {ReFactX: Scalable Reasoning with Reliable Facts via Constrained Generation},
  booktitle = {The Semantic Web -- ISWC 2025},
  year      = {2026},
  publisher = {Springer Nature Switzerland},
  address   = {Cham},
  pages     = {290--308},
  isbn      = {978-3-032-09527-5},
  doi       = {10.1007/978-3-032-09527-5_16},
  url       = {https://doi.org/10.1007/978-3-032-09527-5_16}
}
```