SkyWhal3 committed · verified
Commit f3e2a6d · 1 Parent(s): 36e483a

Update README.md

Files changed (1):
  1. README.md +91 -2
README.md CHANGED
@@ -25,7 +25,7 @@ tags:
 - crispr
 - cas9
 - open-data
-- nstruction-tuning
+- instruction-tuning
 pretty_name: STXBP1 ClinVar Curated Variants
 size_categories:
 - 10M<n<100M
@@ -151,4 +151,93 @@ _Main split for Hugging Face: JSONL format (see above for statistics)._
   "onc_fields": {},
   "sci_fields": {},
   "incl_fields": {}
-}
+}

---

## Loading this dataset with the 🤗 Datasets library

You can easily load this dataset with the 🤗 Datasets library. The Hugging Face infrastructure automatically uses the efficient Parquet files by default, but you can also load the original JSONL if you prefer.

### Install dependencies (if needed):

```bash
pip install datasets
```

## Load the full dataset (Parquet, recommended)

```python
from datasets import load_dataset

# This will automatically use the Parquet shards
ds = load_dataset("SkyWhal3/ClinVar-STXBP1-NLP-Dataset")

# Access examples
print(ds["train"][0])
```

## Force JSONL loading (if you prefer the original format)

```python
from datasets import load_dataset

# Specify data_files to point to the JSONL file(s)
ds = load_dataset(
    "SkyWhal3/ClinVar-STXBP1-NLP-Dataset",
    data_files="ClinVar-STXBP1-NLP-Dataset.jsonl",
    split="train"
)
print(ds[0])
```
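The pandas, Polars, and DuckDB examples below read Parquet shards from local files. One way to fetch them is a sketch using the `huggingface_hub` package (an assumption — it is not listed in the dependencies above); `refs/convert/parquet` is the Hub branch that stores auto-converted Parquet shards:

```python
from huggingface_hub import snapshot_download

# Hypothetical helper: download the auto-converted Parquet shards
# (stored on the Hub's refs/convert/parquet branch) to a local folder.
def download_parquet_shards():
    return snapshot_download(
        repo_id="SkyWhal3/ClinVar-STXBP1-NLP-Dataset",
        repo_type="dataset",
        revision="refs/convert/parquet",
    )

if __name__ == "__main__":
    # Prints the local path containing default/train/*.parquet
    print(download_parquet_shards())
```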

## Other ways to use the data

The examples below read the Parquet shards from local files (the `default/train/*.parquet` layout).

## Load all Parquet shards with pandas

```python
import pandas as pd
import glob

# Load all Parquet shards in the train directory
parquet_files = glob.glob("default/train/*.parquet")
df = pd.concat([pd.read_parquet(pq) for pq in parquet_files], ignore_index=True)
print(df.shape)
print(df.head())
```

## Filter for a gene (e.g., STXBP1)

```python
import pandas as pd

df = pd.read_parquet("default/train/0000.parquet")
stxbp1_df = df[df["gene"] == "STXBP1"]
print(stxbp1_df.head())
```

## Randomly sample a subset

```python
# Reuses the DataFrame loaded above
sample = df.sample(n=5, random_state=42)
print(sample)
```
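Putting the filter and sample steps together, here is a quick offline sketch using a tiny synthetic DataFrame in place of a real shard (the real files require a local download; only the `gene` column mirrors the real schema, and `record_id` is illustrative):

```python
import pandas as pd

# Tiny synthetic stand-in for one Parquet shard; only the "gene"
# column mirrors the real schema, "record_id" is illustrative.
df = pd.DataFrame({
    "record_id": [1, 2, 3, 4, 5],
    "gene": ["STXBP1", "STXBP1", "BRCA1", "STXBP1", "TP53"],
})

# Filter for a gene, then draw a reproducible random sample from the result
stxbp1_df = df[df["gene"] == "STXBP1"]
sample = stxbp1_df.sample(n=2, random_state=42)
print(sample)
```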

## Load with Polars (for high performance)

```python
import polars as pl

df = pl.read_parquet("default/train/0000.parquet")
print(df.head())
```

## Query with DuckDB (SQL-style)

```python
import duckdb

# DuckDB can query Parquet files directly with SQL
con = duckdb.connect()
df = con.execute("SELECT * FROM 'default/train/0000.parquet' WHERE gene='STXBP1' LIMIT 5").df()
print(df)
```

## Streaming mode with 🤗 Datasets

```python
from datasets import load_dataset

# Stream records on demand instead of downloading the full dataset
ds = load_dataset("SkyWhal3/ClinVar-STXBP1-NLP-Dataset", split="train", streaming=True)
for record in ds.take(5):
    print(record)
```