parquet-converter committed on
Commit 84f247e
1 Parent(s): 176a3f1

Update parquet files

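This commit replaces the xz-compressed JSON Lines splits with Parquet files under `joelito--legal_case_document_summarization/`. A minimal sketch of reading the converted splits from a local clone of the repo (assumes pandas with a Parquet engine such as pyarrow installed; the column names come from `prepare_data.py` below):

```python
# Sketch only: read the Parquet splits produced by this commit from a local clone.
import pandas as pd

splits = {
    "train": "joelito--legal_case_document_summarization/json-train.parquet",
    "test": "joelito--legal_case_document_summarization/json-test.parquet",
}

train_df = pd.read_parquet(splits["train"])
print(train_df.columns.tolist())  # expected: ['judgement', 'dataset_name', 'summary']
print(len(train_df))
```
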
README.md DELETED
@@ -1,122 +0,0 @@
- # Dataset Card for LegalCaseDocumentSummarization
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [GitHub](https://github.com/Law-AI/summarization)
- - **Repository:** [Zenodo](https://zenodo.org/record/7152317#.Y69PkeKZODW)
- - **Paper:**
- - **Leaderboard:**
- - **Point of Contact:**
-
- ### Dataset Summary
-
- [More Information Needed]
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed]
-
- ### Languages
-
- [More Information Needed]
-
- ## Dataset Structure
-
- ### Data Instances
-
- [More Information Needed]
-
- ### Data Fields
-
- [More Information Needed]
-
- ### Data Splits
-
- [More Information Needed]
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- [More Information Needed]
-
- ### Citation Information
-
- [More Information Needed]
-
- ### Contributions
-
- Thanks to [@JoelNiklaus](https://github.com/JoelNiklaus) for adding this dataset.
test.jsonl.xz → joelito--legal_case_document_summarization/json-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b2ba3f5de08de7c70bd1157822b2224ef38b0ba54316267febf0e807f54d89e8
- size 2300856
+ oid sha256:9a1b1f7e295e0dbe9ccec1cc0a68bd34c5ad52a7495bd5722fefd305c89f1300
+ size 5669318
train.jsonl.xz → joelito--legal_case_document_summarization/json-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:779c6d28d3cc8b93132514988eb56ee5f464ce803553b4f8c9397c43ed3f5590
- size 50587108
+ oid sha256:f41e98bc51aa711dfc4ba43f92724ecfbee99f1868508c4c7d7c1be5ed454451
+ size 134026239
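Both splits are stored through Git LFS, so the diffs above touch only the three-line pointer files (spec URL, SHA-256 object id, byte size); the Parquet blobs themselves live in LFS storage. A small sketch, using only the standard library, of checking a downloaded file against its pointer (values copied from the `json-test.parquet` pointer above):

```python
# Sketch: verify a downloaded LFS object against the oid/size in its pointer file.
import hashlib
from pathlib import Path

path = Path("joelito--legal_case_document_summarization/json-test.parquet")
expected_oid = "9a1b1f7e295e0dbe9ccec1cc0a68bd34c5ad52a7495bd5722fefd305c89f1300"
expected_size = 5_669_318

data = path.read_bytes()
assert len(data) == expected_size, "size mismatch"
assert hashlib.sha256(data).hexdigest() == expected_oid, "checksum mismatch"
print("pointer verified")
```
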
original_dataset.zip DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:6141613c07eb5a16f0c3f0da0aec974bd218ce12d15035b9aba37dda3e7e1b96
- size 105247667
prepare_data.py DELETED
@@ -1,47 +0,0 @@
- import pandas as pd
-
- import os
- from typing import Union
-
- import datasets
- from datasets import load_dataset
-
-
- def save_and_compress(dataset: Union[datasets.Dataset, pd.DataFrame], name: str, idx=None):
-     if idx:
-         path = f"{name}_{idx}.jsonl"
-     else:
-         path = f"{name}.jsonl"
-
-     print("Saving to", path)
-     dataset.to_json(path, force_ascii=False, orient='records', lines=True)
-
-     print("Compressing...")
-     os.system(f'xz -zkf -T0 {path}')  # -T0 enables multithreaded compression
-
-
- def get_dataset_column_from_text_folder(folder_path):
-     return load_dataset("text", data_dir=folder_path, sample_by="document", split='train').to_pandas()['text']
-
-
- for split in ["train", "test"]:
-     dfs = []
-     for dataset_name in ["IN-Abs", "UK-Abs", "IN-Ext"]:
-         if dataset_name == "IN-Ext" and split == "test":
-             continue  # IN-Ext has no test split
-         print(f"Processing {dataset_name} {split}")
-         path = f"original_dataset/{dataset_name}/{split}-data"
-
-         df = pd.DataFrame()
-         df['judgement'] = get_dataset_column_from_text_folder(f"{path}/judgement")
-         df['dataset_name'] = dataset_name
-
-         if (dataset_name == "UK-Abs" and split == "test") or dataset_name == "IN-Ext":
-             summary_full_path = f"{path}/summary/full"
-         else:
-             summary_full_path = f"{path}/summary"
-         df['summary'] = get_dataset_column_from_text_folder(summary_full_path)
-         dfs.append(df)
-     df = pd.concat(dfs)
-     df = df.fillna("")  # NaNs can lead to huggingface not recognizing the feature type of the column
-     save_and_compress(df, f"data/{split}")
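
The deleted script built each split by loading every judgement file as one document, pairing it with its summary, and tagging the source sub-dataset. For reference, a hedged sketch of loading one of the `.jsonl.xz` files it wrote (the files this commit renames to Parquet; pandas infers xz decompression from the extension):

```python
# Sketch: read a jsonl.xz file as written by save_and_compress() above.
import pandas as pd

df = pd.read_json("data/train.jsonl.xz", lines=True)
print(df["dataset_name"].value_counts())  # rows per sub-dataset: IN-Abs, UK-Abs, IN-Ext
```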