---
license: cc-by-nc-4.0
task_categories:
  - text-classification
  - token-classification
language:
  - sa
  - bo
pretty_name: DharmaBench
---

# Dataset Card for DharmaBench

## Dataset Details

### Dataset Description

DharmaBench is a multi-task benchmark suite for evaluating large language models (LLMs) on classification and detection tasks in historical Buddhist texts written in Sanskrit and Classical Tibetan.
It contains 13 tasks (6 Sanskrit, 7 Tibetan), with 4 tasks shared across both languages, designed to measure linguistic, cultural, and structural understanding in low-resource, ancient-language contexts.

The benchmark includes tasks such as metaphor and simile detection, quotation detection, verse/prose classification, metre classification, and root-text/commentary alignment. These reflect key challenges faced by philologists, historians of philosophy and religion, and digital humanities researchers studying Buddhist textual traditions. For the exact definition and description of the tasks, please see the repository or the paper.

- **Curated by:** Intellexus Project (Kai Golan Hashiloni et al.)
- **Funded by:** Supported in part by the European Research Council (Intellexus, Project No. 101118558)
- **Shared by:** Intellexus Project
- **Language(s):** Sanskrit (`sa`), Classical Tibetan (`bo`)
- **License:** CC BY 4.0

### Dataset Sources

- **Paper:** https://aclanthology.org/2025.ijcnlp-long.114/

## Uses

### Direct Use

DharmaBench can be used to:

- Evaluate multilingual or low-resource LLMs on culturally and linguistically rich ancient-language data.
- Benchmark Sanskrit and Classical Tibetan performance across a variety of classification and detection tasks.
- Support philologists and digital humanists in semi-automating annotation, quotation tracing, or commentary alignment.
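As an illustration of the benchmarking use case, per-task predictions can be scored with standard classification metrics. The sketch below is plain Python, not the paper's official evaluation code, and the `verse`/`prose` labels are purely illustrative:

```python
def accuracy(gold, pred):
    """Fraction of predictions that match the gold labels."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def macro_f1(gold, pred):
    """Unweighted mean of per-class F1 over the classes present in gold."""
    scores = []
    for label in sorted(set(gold)):
        tp = sum(g == label and p == label for g, p in zip(gold, pred))
        fp = sum(g != label and p == label for g, p in zip(gold, pred))
        fn = sum(g == label and p != label for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

# Toy example with made-up labels for a verse/prose classification task.
gold = ["verse", "prose", "verse", "prose"]
pred = ["verse", "verse", "verse", "prose"]
print(accuracy(gold, pred))             # -> 0.75
print(round(macro_f1(gold, pred), 3))   # -> 0.733
```

Macro-averaging weights each class equally, which matters here because several tasks have skewed label distributions.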

### Out-of-Scope Use

- None

## Dataset Structure

- Each task is located under either `Sanskrit/` or `Tibetan/`, with files such as `train.json` and `test.json`, depending on availability.
- Each task has a slightly different structure and column set.
- All data are standardized and formatted for text- and token-level tasks.
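Given the layout described above, a task split can be loaded with a few lines of Python. The task directory name and the `text`/`label` fields below are hypothetical placeholders; actual columns vary from task to task:

```python
import json
import tempfile
from pathlib import Path

def load_task(task_dir, split="train"):
    """Load one task split (train.json or test.json) as a list of records."""
    path = Path(task_dir) / f"{split}.json"
    with path.open(encoding="utf-8") as f:
        return json.load(f)

# Demo with a throwaway directory and an invented record;
# real column names differ per task.
with tempfile.TemporaryDirectory() as tmp:
    task_dir = Path(tmp) / "Sanskrit" / "verse_prose"  # hypothetical task name
    task_dir.mkdir(parents=True)
    (task_dir / "train.json").write_text(
        json.dumps([{"text": "...", "label": "verse"}]), encoding="utf-8"
    )
    records = load_task(task_dir)
    print(len(records), records[0]["label"])  # -> 1 verse
```

Inspect each task's files to see its actual columns before writing task-specific code.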

## Dataset Creation

### Curation Rationale

The dataset was created to enable systematic benchmarking of LLMs on Sanskrit and Classical Tibetan, languages central to Buddhist textual transmission yet underrepresented in NLP. It supports evaluation of linguistic understanding, structural analysis, and cultural reasoning.

### Source Data

#### Data Collection and Processing

Texts were sourced from public-domain Buddhist corpora, including digitized canonical and commentarial materials. Data were cleaned, normalized, and manually aligned where necessary. Problematic or ambiguous samples were discussed collaboratively and excluded when consensus could not be reached.

#### Who are the source data producers?

Original texts were produced by Buddhist scholars from the 1st millennium BCE to the 19th century CE. Digital transcriptions were prepared by open-source initiatives and Buddhist textual archives.

### Annotations

#### Annotation process

Domain experts in Sanskrit and Classical Tibetan studies carried out annotations. Ambiguities and inconsistencies were discussed collaboratively, and annotation guidelines were iteratively refined. Disagreements were resolved through group discussion or by excluding samples when consensus was not possible.
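The card does not report an agreement statistic, but pairwise agreement in an annotation process like this is commonly summarized with Cohen's kappa. A self-contained sketch; the `metaphor`/`literal` tags and both label sequences are invented for illustration:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' label sequences of equal length."""
    assert len(a) == len(b) and a
    n = len(a)
    # Observed agreement: fraction of items with identical labels.
    po = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: product of each annotator's marginal label rates.
    labels = set(a) | set(b)
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe) if pe != 1 else 1.0

# Invented annotations for a metaphor-detection task.
ann1 = ["metaphor", "literal", "metaphor", "literal", "metaphor"]
ann2 = ["metaphor", "literal", "literal", "literal", "metaphor"]
print(round(cohens_kappa(ann1, ann2), 3))  # -> 0.615
```

Unlike raw percent agreement, kappa discounts agreement expected by chance, which matters for tasks with imbalanced labels.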

#### Who are the annotators?

Annotators were scholars and research assistants from the Intellexus Project, with backgrounds in Buddhist studies, linguistics, and computational linguistics.

### Personal and Sensitive Information

No personal or sensitive information is contained in the dataset. All texts are historical and in the public domain.

## Bias, Risks, and Limitations

The dataset represents canonical and scholastic Buddhist materials and may not generalize to colloquial or modern-language use. Biases inherent in the source texts (e.g., religious, philosophical, or gender-related perspectives) are preserved to maintain their historical authenticity.

Tasks with very short textual inputs can sometimes be resolved through formal cues (e.g., punctuation, structure) rather than deep understanding.

### Recommendations

Users should be aware of the dataset's risks, biases, and limitations, interpret model performance cautiously, and avoid overgeneralizing results. DharmaBench is best suited to comparative evaluation and fine-tuning in controlled research settings.

## Citation

If you use DharmaBench in your research, please cite:

**BibTeX:**

```bibtex
@inproceedings{hashiloni-etal-2025-dharmabench,
    title = "{D}harma{B}ench: Evaluating Language Models on Buddhist Texts in {S}anskrit and {T}ibetan",
    author = "Hashiloni, Kai Golan  and
      Cohen, Shay  and
      Shina, Asaf  and
      Yang, Jingyi  and
      Zwebner, Orr Meir  and
      Bajetta, Nicola  and
      Bilitski, Guy  and
      Sund{\'e}n, Rebecca  and
      Maduel, Guy  and
      Conlon, Ryan  and
      Barzilai, Ari  and
      Mass, Daniel  and
      Jia, Shanshan  and
      Naaman, Aviv  and
      Choden, Sonam  and
      Jamtsho, Sonam  and
      Qu, Yadi  and
      Isaacson, Harunaga  and
      Wangchuk, Dorji  and
      Fine, Shai  and
      Almogi, Orna  and
      Bar, Kfir",
    editor = "Inui, Kentaro  and
      Sakti, Sakriani  and
      Wang, Haofen  and
      Wong, Derek F.  and
      Bhattacharyya, Pushpak  and
      Banerjee, Biplab  and
      Ekbal, Asif  and
      Chakraborty, Tanmoy  and
      Singh, Dhirendra Pratap",
    booktitle = "Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics",
    month = dec,
    year = "2025",
    address = "Mumbai, India",
    publisher = "The Asian Federation of Natural Language Processing and The Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.ijcnlp-long.114/",
    pages = "2088--2110",
    ISBN = "979-8-89176-298-5",
    abstract = "We assess the capabilities of large language models on tasks involving Buddhist texts written in Sanskrit and Classical Tibetan{---}two typologically distinct, low-resource historical languages. To this end, we introduce DharmaBench, a benchmark suite comprising 13 classification and detection tasks grounded in Buddhist textual traditions: six in Sanskrit and seven in Tibetan, with four shared across both. The tasks are curated from scratch, tailored to the linguistic and cultural characteristics of each language. We evaluate a range of models, from proprietary systems like GPT-4o to smaller, domain-specific open-weight models, analyzing their performance across tasks and languages. All datasets and code are publicly released, under the CC-BY-4 License and the Apache-2.0 License respectively, to support research on historical language processing and the development of culturally inclusive NLP systems."
}
```

**APA:**

Hashiloni, K. G., Cohen, S., Shina, A., Yang, J., Zwebner, O. M., Bajetta, N., Bilitski, G., Sundén, R., Maduel, G., Conlon, R., Barzilai, A., Mass, D., Jia, S., Naaman, A., Choden, S., Jamtsho, S., Qu, Y., Isaacson, H., Wangchuk, D., Fine, S., Almogi, O., & Bar, K. (2025). DharmaBench: Evaluating language models on Buddhist texts in Sanskrit and Tibetan. In *Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics* (pp. 2088–2110).

## Dataset Card Authors

Kai Golan Hashiloni (Intellexus Project), with contributions from the Intellexus Sanskrit and Tibetan research teams.

## Dataset Card Contact

For questions or contributions: kai.golanhashiloni@post.runi.ac.il