kdutia committed
Commit fa485be
1 Parent(s): 203f307

Update README.md

Files changed (1)
  1. README.md +5 -6
README.md CHANGED
@@ -17,7 +17,6 @@ The files in this dataset are as follows:

  - `metadata.csv`: a CSV containing document metadata for each document we have collected. **This metadata may not be the same as what's stored in the source databases** – we have cleaned and added metadata where it's corrupted or missing.
  - `full_text.parquet`: a parquet file containing the full text of each document we have parsed. Each row is a text block (paragraph) with all the associated text block and document metadata.
- - `full_text.jsonl`: a JSON file containing the same data as the parquet file. **It's recommended to use the parquet file as it stores the data 10x more efficiently.**

  A research tool you can use to view this data and the results of some classifiers run on it is at [gst1.org](https://gst1.org).

@@ -79,12 +78,12 @@ metadata = pd.read_csv("metadata.csv")

  ### Loading text block data

- As mentioned at the top of this README **the parquet file is recommended over JSON** as it stores the data much (10x) more efficiently, meaning a lower load on your system's memory.
+ Once loaded into a Huggingface Dataset or Pandas DataFrame object, the parquet file can be converted to other formats, e.g. Excel, CSV or JSON.

  ``` py
- # Reading from parquet
- text_blocks = pd.read_parquet("full_text.parquet")
+ # Using huggingface (easiest)
+ dataset = load_dataset("ClimatePolicyRadar/global-stocktake-documents")

- # Reading from jsonl
- text_blocks = pd.read_json("full_text.jsonl", lines=True)
+ # Using pandas
+ text_blocks = pd.read_parquet("full_text.parquet")
  ```
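
For reference, below is a minimal runnable sketch of the new loading snippet. The dataset name and the two loading calls come from the diff above; the imports, the `"train"` split name, and the export file paths are assumptions added for illustration.

``` py
# Sketch assuming the `datasets` and `pandas` libraries are installed and,
# for the pandas path, that full_text.parquet has been downloaded locally.
from datasets import load_dataset
import pandas as pd

# Using huggingface: load directly from the Hub
dataset = load_dataset("ClimatePolicyRadar/global-stocktake-documents")
df = dataset["train"].to_pandas()  # "train" split name is an assumption

# Using pandas: read the local parquet file
text_blocks = pd.read_parquet("full_text.parquet")

# Once in a DataFrame, the data can be exported to other formats,
# e.g. CSV or JSON Lines (Excel export additionally requires openpyxl).
text_blocks.to_csv("full_text.csv", index=False)
text_blocks.to_json("full_text.jsonl", orient="records", lines=True)
```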