pszemraj committed
Commit
cf7e520
1 Parent(s): 4074eff

Update README.md

Files changed (1)
  1. README.md +61 -0
README.md CHANGED
---
license: mit
task_categories:
- summarization
language:
- en
size_categories:
- 10K<n<100K
---

# scientific_lay_summarisation - PLOS - normalized

This dataset contains scientific articles paired with lay summaries, preprocessed using the code provided in this repository. The preprocessing fixes punctuation and whitespace issues and calculates the token length of each text sample with the T5 tokenizer.

## Data Cleaning

The text in both the "article" and "summary" columns was processed to ensure that punctuation and whitespace are consistent. The `fix_punct_whitespace` function was applied to each text sample to:

- Remove spaces before punctuation marks (except for parentheses)
- Add a space after punctuation marks (except for parentheses) if one is missing
- Handle spaces around parentheses
- Add a space after a closing parenthesis when it is followed by a word or an opening parenthesis
- Handle spaces around quotation marks
- Handle spaces around single quotes
- Handle commas in numbers
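
As a rough illustration only, a regex-based normalizer along the following lines covers most of these rules; the body below is an assumption for readability, not the repository's exact code (single quotes are omitted for brevity):

```python
import re

def fix_punct_whitespace(text: str) -> str:
    """Illustrative punctuation/whitespace normalizer (not the repository's exact code)."""
    # Remove spaces before sentence punctuation: "word ." -> "word."
    text = re.sub(r"\s+([.,;:!?])", r"\1", text)
    # Ensure a space after sentence punctuation when a word follows (decimals not special-cased here)
    text = re.sub(r"([.,;:!?])(\w)", r"\1 \2", text)
    # No padding just inside parentheses: "( text )" -> "(text)"
    text = re.sub(r"\(\s+", "(", text)
    text = re.sub(r"\s+\)", ")", text)
    # Space after a closing parenthesis when a word or "(" follows
    text = re.sub(r"\)(\w|\()", r") \1", text)
    # Trim padding just inside double quotes: '" quoted "' -> '"quoted"'
    text = re.sub(r'"\s*([^"]*?)\s*"', r'"\1"', text)
    # Re-join digit groups split around a comma: "1, 000" -> "1,000"
    text = re.sub(r"(\d),\s+(\d)", r"\1,\2", text)
    return text

print(fix_punct_whitespace("Mice ( n = 12 )were fed 1, 000 mg .Controls were not ."))
# -> "Mice (n = 12) were fed 1,000 mg. Controls were not."
```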

## Tokenization

The length of each text sample was calculated in tokens using the T5 tokenizer. The `calculate_token_length` function encodes each text sample with the tokenizer and returns the number of resulting tokens; the token lengths were added as new columns to the dataframes.
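
A minimal sketch of how such a length column can be computed, assuming the `t5-base` tokenizer from `transformers` (the exact checkpoint used is an assumption):

```python
import pandas as pd
from transformers import AutoTokenizer

# The exact checkpoint is an assumption; any T5 tokenizer behaves the same way here
tokenizer = AutoTokenizer.from_pretrained("t5-base")

def calculate_token_length(text: str) -> int:
    """Return the number of tokens the T5 tokenizer produces for `text`."""
    return len(tokenizer.encode(text, truncation=False))

# Toy dataframe standing in for one split of the dataset
df = pd.DataFrame({
    "article": ["The mitochondrion is the powerhouse of the cell."],
    "summary": ["Mitochondria make energy."],
})
df["article_length"] = df["article"].apply(calculate_token_length)
df["summary_length"] = df["summary"].apply(calculate_token_length)
print(df[["article_length", "summary_length"]])
```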

## Data Format

The processed datasets are saved as Parquet files in separate directories. Each directory is named after the dataset and subset (e.g. `scientific_lay_summarisation-plos-norm`) and contains three Parquet files, one each for the train, test, and validation splits.

The datasets can be loaded with the `pandas` library or with the Hugging Face `datasets` library. The column names and data types are as follows:

- `article`: the scientific article text (string)
- `summary`: the lay summary text (string)
- `article_length`: the length of the article in tokens (int)
- `summary_length`: the length of the summary in tokens (int)

## Usage

To use the processed datasets, load the desired Parquet file(s) using `pandas` or `datasets`. Here is an example using `pandas`:

```python
import pandas as pd

# Load the train split from a local copy of the dataset
df = pd.read_parquet("scientific_lay_summarisation-plos-norm/train.parquet")

# Print the first few rows
print(df.head())
```

And here is an example using `datasets`:

```python
from datasets import load_dataset

# load_dataset returns a DatasetDict with train/test/validation splits
dataset = load_dataset("pszemraj/scientific_lay_summarisation-plos-norm")

# Print the first few samples from the train split
for i in range(5):
    print(dataset["train"][i])
```
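
The precomputed length columns make it easy to restrict the data to samples that fit a given context window. A small sketch using `datasets` (the 1024-token budget is an arbitrary choice for illustration):

```python
from datasets import load_dataset

dataset = load_dataset("pszemraj/scientific_lay_summarisation-plos-norm")

# Keep only training samples whose article fits an (arbitrary) 1024-token budget
short_train = dataset["train"].filter(lambda example: example["article_length"] <= 1024)
print(f"{len(short_train)} of {len(dataset['train'])} training articles fit the budget")
```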