A novel dataset for benchmarking the citation worthiness detection task on the American Legal Corpus. For more details about the dataset, please refer to the original paper.

Data Fields

  • File Name: the case file to which the sentence belongs.
  • Sentence Number: the position of the sentence within the document.
  • Sentence: the naturally occurring sentence from the text (after preprocessing to remove the citation span).
  • Label: integer value 0 or 1; 0 means the sentence is not citation worthy, 1 means it is citation worthy.
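
For illustration, a single record might look like the sketch below. The exact column names follow the field descriptions above, and all values (file name, sentence text) are hypothetical, not taken from the dataset.

# Illustrative record; values are hypothetical and column names are assumed
# to match the field descriptions above.
example = {
    "File Name": "case_000123.txt",
    "Sentence Number": 17,
    "Sentence": "The defendant's motion to dismiss was denied.",
    "Label": 1,  # 1 = citation worthy, 0 = not citation worthy
}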

Data Splits

Split               #Datapoints
Train-Small         800,000
Validation-Small    100,000
Test-Small          100,000
Train-Medium        8,000,000
Validation-Medium   1,000,000
Test-Medium         1,000,000
Train-Large         142,588,927
Validation-Large    17,934,940
Test-Large          17,935,336
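
Each of the three configurations shown in the loading examples below ("small", "medium", "large") provides its own train, validation, and test splits. As a quick sanity check, the row counts in the table can be verified after loading; the split names "train", "validation", and "test" are assumed here and may differ in the actual repository.

from datasets import load_dataset

# Load one configuration and print the number of rows in each split.
# Split names are assumed; adjust if the repository uses different keys.
dataset = load_dataset("Vidhaan/LegalCitationWorthiness", "small")
for split_name, split in dataset.items():
    print(split_name, split.num_rows)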

Small Dataset

from datasets import load_dataset

# get small dataset
dataset = load_dataset("Vidhaan/LegalCitationWorthiness", "small")

Medium Dataset

from datasets import load_dataset

# get medium dataset
dataset = load_dataset("Vidhaan/LegalCitationWorthiness", "medium")

Large Dataset

from datasets import load_dataset

# get large dataset
dataset = load_dataset("Vidhaan/LegalCitationWorthiness", "large")
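
The large configuration contains well over 100 million sentences, so downloading it in full may not be practical. A streaming load, a standard feature of the datasets library, is sketched below as one way to iterate over the data without materializing the whole split on disk; the split name "train" is assumed.

from datasets import load_dataset

# Stream the large training split instead of downloading it entirely.
# The split name "train" is assumed here.
dataset = load_dataset("Vidhaan/LegalCitationWorthiness", "large", split="train", streaming=True)
for example in dataset:
    # process one example at a time
    break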

Citation Information

Contributions

Thanks to @PritishWadhwa, @gitongithub, @khatrimann, @reshma, and @dhumketu for adding this dataset.
