The Caselaw Access Project

In collaboration with Ravel Law, Harvard Law Library digitized over 40 million pages of U.S. court decisions, comprising 6.7 million cases from the last 360 years, into a freely accessible dataset. A bulk download of the data is available through the Caselaw Access Project API (CAPAPI): https://case.law/caselaw/

More information about accessing state and federal written court decisions through the bulk data service is available in the documentation: https://case.law/docs/

Learn more about the Caselaw Access Project and all of the phenomenal work done by Jack Cushman, Greg Leppert, and Matteo Cargnelutti here: https://case.law/about/

Watch a live stream of the data release here: https://lil.law.harvard.edu/about/cap-celebration/stream

Post-processing

Teraflop AI is excited to support the Caselaw Access Project and the Harvard Library Innovation Lab in the release of over 6.6 million state and federal court decisions published throughout U.S. history. Democratizing fair access to this data for the public, the legal community, and researchers is important. This dataset is a processed and cleaned version of the original CAP data.
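
The cleaned dataset can be streamed directly from the Hugging Face Hub with the datasets library. The sketch below only prints the column names of the first few records; check the dataset page for the exact schema.

# Stream the cleaned CAP dataset from the Hugging Face Hub without a full download.
from datasets import load_dataset

ds = load_dataset("TeraflopAI/Caselaw_Access_Project", split="train", streaming=True)

# Inspect the first few records; confirm column names on the dataset page.
for example in ds.take(3):
    print(sorted(example.keys()))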

The digitization of these texts introduced OCR errors. To prepare each text for model training, we post-processed it to fix encoding, normalization, repetition, redundancy, parsing, and formatting issues.
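
The exact cleaning pipeline is not published here; the snippet below is only a minimal sketch of the kinds of fixes described above (encoding repair, Unicode normalization, and whitespace cleanup), using ftfy and a few illustrative regular expressions as assumptions.

# Illustrative cleanup sketch; not Teraflop AI's actual pipeline.
import re
import unicodedata

import ftfy  # pip install ftfy


def clean_text(text: str) -> str:
    text = ftfy.fix_text(text)                  # repair mojibake and broken encodings
    text = unicodedata.normalize("NFKC", text)  # normalize Unicode forms
    text = re.sub(r"[ \t]+", " ", text)         # collapse runs of spaces and tabs
    text = re.sub(r"\n{3,}", "\n\n", text)      # collapse excessive blank lines
    return text.strip()


print(clean_text("Th\u00e9 court\u00a0held\u2026\n\n\n\nAffirmed."))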

Teraflop AI’s data engine enables massively parallel processing of web-scale datasets into cleaned text. Its one-click deployment let us split the computation across thousands of nodes on our managed infrastructure.

BGE Embeddings

We additionally provide bge-base-en-v1.5 embeddings, computed over the first 512 tokens of each post-processed document, for every state jurisdiction and for federal case law. Mean pooling and normalization were used for the embeddings.
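
As a sketch of that setup: the stock Sentence Transformers configuration for bge-base-en-v1.5 uses CLS pooling, so reproducing the mean pooling and normalization described above means assembling the modules explicitly, roughly as follows.

# Sketch: bge-base-en-v1.5 with mean pooling, L2 normalization, and a 512-token limit.
from sentence_transformers import SentenceTransformer, models

transformer = models.Transformer("BAAI/bge-base-en-v1.5", max_seq_length=512)
pooling = models.Pooling(transformer.get_word_embedding_dimension(), pooling_mode="mean")
normalize = models.Normalize()

model = SentenceTransformer(modules=[transformer, pooling, normalize])

embeddings = model.encode(["The court held that the statute was unconstitutional."])
print(embeddings.shape)  # (1, 768)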

We used the Sentence Transformers library maintained by Tom Aarsen of Hugging Face to distribute the embedding process across multiple GPUs. Find an example of how to use multiprocessing for embeddings here.
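
A minimal sketch of that multi-process setup with Sentence Transformers, assuming several GPUs are visible, looks like this (the corpus below is a placeholder).

# Shard embedding work across all visible GPUs with Sentence Transformers.
from sentence_transformers import SentenceTransformer

if __name__ == "__main__":
    model = SentenceTransformer("BAAI/bge-base-en-v1.5")
    documents = ["placeholder case text"] * 100_000  # stand-in for the CAP corpus

    pool = model.start_multi_process_pool()          # one worker process per GPU
    embeddings = model.encode_multi_process(documents, pool, batch_size=64)
    model.stop_multi_process_pool(pool)

    print(embeddings.shape)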

We improved the inference throughput of the embedding process by using Tri Dao’s Flash Attention. Find the Flash Attention repository here.
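
Whether this is the exact integration used is not stated; with recent sentence-transformers and transformers releases, FlashAttention-2 can be requested through model_kwargs, provided flash-attn is installed and the architecture is supported. A sketch:

# Sketch: request FlashAttention-2 for the underlying transformer (requires flash-attn,
# a CUDA GPU, and a transformers build that supports it for this architecture).
import torch
from sentence_transformers import SentenceTransformer

model = SentenceTransformer(
    "BAAI/bge-base-en-v1.5",
    device="cuda",
    model_kwargs={
        "attn_implementation": "flash_attention_2",
        "torch_dtype": torch.float16,
    },
)

embeddings = model.encode(["Flash Attention speeds up large-batch encoding."])
print(embeddings.shape)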

You can read the research paper on the BGE embedding models by Shitao Xiao and Zheng Liu here.

The code for training BGE embedding models and other great research efforts can be found on GitHub here.

All of the datasets used to train the BGE embedding models are available here.

The bge-base-en-v1.5 model weights are available on Hugging Face. The model card provides news, a list of other available models, training, usage, and benchmark information: https://huggingface.co/BAAI/bge-base-en-v1.5
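
For retrieval-style usage, the model card recommends prefixing short queries with an instruction; the example below follows that pattern (the prefix is quoted from the model card) and scores passages with cosine similarity from sentence_transformers.util.

# Query-to-passage retrieval with bge-base-en-v1.5 and cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Instruction prefix recommended on the model card for short retrieval queries.
query = "Represent this sentence for searching relevant passages: adverse possession of farmland"
passages = [
    "The defendant occupied the disputed farmland openly and continuously for twelve years.",
    "The court reviewed the standard for summary judgment under Rule 56.",
]

query_emb = model.encode(query, normalize_embeddings=True)
passage_embs = model.encode(passages, normalize_embeddings=True)

scores = util.cos_sim(query_emb, passage_embs)
print(scores)  # higher score = more relevant passage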

Licensing Information

The Caselaw Access Project dataset is licensed under the CC0 License.

Citation Information

The President and Fellows of Harvard University. "Caselaw Access Project." 2024, https://case.law/
@misc{ccap,
    title={Cleaned Caselaw Access Project},
    author={Enrico Shippole and Aran Komatsuzaki},
    howpublished={\url{https://huggingface.co/datasets/TeraflopAI/Caselaw_Access_Project}},
    year={2024}
}