---
dataset_info:
  features:
    - name: section
      dtype: string
    - name: filename
      dtype: string
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 73734751.97826087
      num_examples: 43
    - name: validation
      num_bytes: 1714761.6739130435
      num_examples: 1
    - name: test
      num_bytes: 3429523.347826087
      num_examples: 2
  download_size: 42120770
  dataset_size: 78879037.00000001
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
license: odc-by
task_categories:
  - text-generation
  - fill-mask
language:
  - en
size_categories:
  - n<1K
---

# law books (nougat-small)

A decent chunk of the Survivor Library's law collection, OCR'd to markdown with the `nougat-small` model: https://www.survivorlibrary.com/index.php/8-category/173-library-law
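To load it with the 🤗 `datasets` library (repo ID taken from the push command below; split and column names from the metadata above):

```python
from datasets import load_dataset

# loads all three splits: train (43 books), validation (1), test (2)
ds = load_dataset("BEE-spoke-data/survivorslib-law-books")

# each row has three string columns: section, filename, text
example = ds["train"][0]
print(example["filename"])
print(example["text"][:500])
```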

The dataset was built and pushed to the hub with `push_dataset_from_text.py`; the console log from the run:

```text
(ki) primerdata-for-LLMs python push_dataset_from_text.py /home/pszemraj/Dropbox/programming-projects/primerdata-for-LLMs/utils/output-hf-nougat-space/law -e .md -r BEE-spoke-data/survivorslib-law-books
INFO:__main__:Looking for files with extensions: ['md']
Processing md files: 100%|███████████████████████████████| 46/46 [00:00<00:00, 778.32it/s]
INFO:__main__:Found 46 text files.
INFO:__main__:Performing train-test split...
INFO:__main__:Performing validation-test split...
INFO:__main__:Train size: 43
INFO:__main__:Validation size: 1
INFO:__main__:Test size: 2
INFO:__main__:Pushing dataset
```
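The script itself isn't included here; below is a minimal sketch of the steps the log implies (collect `.md` files, a two-stage train/validation/test split, push to the hub). The function name, the `section` heuristic, and the hard-coded split sizes are illustrative assumptions, not the actual script.

```python
from pathlib import Path

from datasets import Dataset, DatasetDict
from tqdm.auto import tqdm


def build_and_push(root: str, ext: str = ".md", repo_id: str = "user/repo"):
    """Collect text files under `root` and push a train/validation/test split."""
    files = sorted(Path(root).rglob(f"*{ext}"))
    records = {"section": [], "filename": [], "text": []}
    for f in tqdm(files, desc=f"Processing {ext.lstrip('.')} files"):
        # assumption: use the parent directory name as the section label
        records["section"].append(f.parent.name)
        records["filename"].append(f.name)
        records["text"].append(f.read_text(encoding="utf-8"))

    ds = Dataset.from_dict(records)
    # two-stage split, matching the log: 46 files -> 43 train,
    # then the 3 held-out files -> 1 validation / 2 test
    split = ds.train_test_split(test_size=3, seed=42)
    holdout = split["test"].train_test_split(test_size=2, seed=42)
    DatasetDict(
        train=split["train"],
        validation=holdout["train"],
        test=holdout["test"],
    ).push_to_hub(repo_id)
```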