---
dataset_info:
  features:
  - name: section
    dtype: string
  - name: filename
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 73734751.97826087
    num_examples: 43
  - name: validation
    num_bytes: 1714761.6739130435
    num_examples: 1
  - name: test
    num_bytes: 3429523.347826087
    num_examples: 2
  download_size: 42120770
  dataset_size: 78879037.00000001
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
license: odc-by
task_categories:
- text-generation
- fill-mask
language:
- en
size_categories:
- n<1K
---

# law books (nougat-small)


A decent chunk of the Survivor Library law collection: https://www.survivorlibrary.com/index.php/8-category/173-library-law


```text
(ki) primerdata-for-LLMs python push_dataset_from_text.py /home/pszemraj/Dropbox/programming-projects/primerdata-for-LLMs/utils/output-hf-nougat-space/law -e .md -r BEE-spoke-data/survivorslib-law-books
INFO:__main__:Looking for files with extensions: ['md']
Processing md files: 100%|███████████████████████████████| 46/46 [00:00<00:00, 778.32it/s]
INFO:__main__:Found 46 text files.
INFO:__main__:Performing train-test split...
INFO:__main__:Performing validation-test split...
INFO:__main__:Train size: 43
INFO:__main__:Validation size: 1
INFO:__main__:Test size: 2
INFO:__main__:Pushing dataset
```
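
The two-stage split in the log (a train-test split, then a validation-test split of the held-out pool) can be sketched in plain Python. The filenames below are hypothetical stand-ins; only the counts (46 files → 43 train / 1 validation / 2 test) come from the log:

```python
import random

# hypothetical stand-ins for the 46 markdown files the script found
files = [f"book_{i:02d}.md" for i in range(46)]

random.seed(0)
random.shuffle(files)

# stage 1: split a small eval pool off the training set
n_eval = 3
train, eval_pool = files[:-n_eval], files[-n_eval:]

# stage 2: divide the eval pool into validation and test
validation, test = eval_pool[:1], eval_pool[1:]

print(len(train), len(validation), len(test))  # 43 1 2
```

Every file lands in exactly one split, so the three splits partition the original 46 documents.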