maom committed
Commit b576501 • 1 Parent(s): f388a23

Update README.md


add usage details

Files changed (1)
  1. README.md +84 -0
README.md CHANGED
@@ -251,6 +251,86 @@ dataset_info:
  representative genomes across the microbial tree of life and annotate
  them functionally on a per-residue basis.

+
+ ## Quickstart Usage
+
+ Each subset can be loaded into Python using the Hugging Face [datasets](https://huggingface.co/docs/datasets/index) library.
+ First, from the command line, install the `datasets` library:
+
+ $ pip install datasets
+
+ Optionally, set the cache directory, e.g.
+
+ $ HF_HOME=${HOME}/.cache/huggingface/
+ $ export HF_HOME
+
+ Then, from within Python, load the `datasets` library
+
+ >>> import datasets
+
+ and load one of the `MIP` model subsets, e.g.,
+
+ >>> dataset_tag = "rosetta_high_quality"
+ >>> dataset_models = datasets.load_dataset(
+         path = "RosettaCommons/MIP",
+         name = f"{dataset_tag}_models",
+         data_dir = f"{dataset_tag}_models")
+
+ and inspect the loaded dataset
+
+ >>> dataset_models
+ DatasetDict({
+     train: Dataset({
+         features: ['id', 'pdb', 'Filter_Stage2_aBefore', 'Filter_Stage2_bQuarter', 'Filter_Stage2_cHalf', 'Filter_Stage2_dEnd', 'clashes_bb', 'clashes_total', 'score', 'silent_score', 'time'],
+         num_rows: 211069
+     })
+ })
+
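+ A single model can be written out as a `.pdb` file (a minimal sketch, assuming the `pdb` feature stores the full PDB-formatted text of each model):
+
+ >>> record = dataset_models["train"][0]
+ >>> with open("target.pdb", "w") as pdb_file:
+         # write this model's structure to disk for use with structure-based tools
+         pdb_file.write(record["pdb"])
+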
+ Many structure-based pipelines expect a `.pdb` file as input. For example, `frame2seq` takes in a structure
+ and generates sequences for the backbone. The `frame2seq` package can be installed using `pip` from the command line:
+
+ $ pip install frame2seq
+
+ Then use it from within Python:
+
+ >>> from frame2seq import Frame2seqRunner
+ >>> runner = Frame2seqRunner()
+ >>> runner.design(
+         pdb_file = "target.pdb",
+         chain_id = "A",
+         temperature = 1,
+         num_samples = 5000)
+
+ To run `frame2seq` on each MIP target, write each record's `pdb` field to a file and pass it to the runner, e.g.
+
+ >>> for row in dataset_models["train"]:
+         print(f"Predicting sequences for id = {row['id']}")
+         with open(f"{row['id']}.pdb", "w") as pdb_file:
+             pdb_file.write(row["pdb"])
+         runner.design(pdb_file = f"{row['id']}.pdb", chain_id = "A", temperature = 1, num_samples = 5000)
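+
+ Iterating over all 211,069 models can take a long time; for a quick test, a few rows can be selected first (a sketch using `Dataset.select`, which takes row indices):
+
+ >>> subset = dataset_models["train"].select(range(5))
+ >>> [row["id"] for row in subset]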
+
+ The function predictions for the same subset can be loaded in the same way:
+
+ >>> dataset_function_prediction = datasets.load_dataset(
+         path = "RosettaCommons/MIP",
+         name = f"{dataset_tag}_function_predictions",
+         data_dir = f"{dataset_tag}_function_predictions")
+ Downloading readme: 100%|████████████████████████████████████████| 15.4k/15.4k [00:00<00:00, 264kB/s]
+ Resolving data files: 100%|██████████████████████████████████████| 219/219 [00:00<00:00, 1375.51it/s]
+ Downloading data: 100%|█████████████████████████████████████████| 219/219 [13:04<00:00, 3.58s/files]
+ Generating train split: 100%|████████████| 1332900735/1332900735 [13:11<00:00, 1684288.89 examples/s]
+ Loading dataset shards: 100%|██████████████████████████████████████| 219/219 [01:22<00:00, 2.66it/s]
+
+ This loads the `>1.3B` function predictions (xxx targets x yyyy terms from the GO and EC ontologies).
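+
+ Because the full table is large, it can also be streamed instead of fully downloaded; `datasets.load_dataset` accepts `streaming = True` (a sketch reusing the arguments above):
+
+ >>> dataset_function_prediction_stream = datasets.load_dataset(
+         path = "RosettaCommons/MIP",
+         name = f"{dataset_tag}_function_predictions",
+         data_dir = f"{dataset_tag}_function_predictions",
+         streaming = True)
+ >>> next(iter(dataset_function_prediction_stream["train"]))
+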
+ The predictions are stored in long format, but can be easily converted to a wide format using pandas:
+
+ >>> dataset_function_prediction
+
+ >>> import pandas
+ >>> dataset_function_prediction_wide = pandas.pivot(
+         dataset_function_prediction.data['train'].select(['id', 'term_id', 'Y_hat']).to_pandas(),
+         columns = "term_id",
+         index = "id",
+         values = "Y_hat")
+ >>> dataset_function_prediction_wide.iloc[1:3, 1:3]
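+
+ The long format is also convenient for per-target queries; for example, to list the highest-scoring predicted terms for one target (a sketch, assuming a larger `Y_hat` indicates a stronger prediction):
+
+ >>> predictions = dataset_function_prediction.data['train'].select(['id', 'term_id', 'Y_hat']).to_pandas()
+ >>> predictions[predictions["id"] == predictions["id"].iloc[0]].nlargest(10, "Y_hat")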
+
  ## Dataset Details

  ### Dataset Description
 
@@ -363,6 +443,10 @@ genome database across the microbial tree of life:

  ### Recommendations

+
+
+
+
  <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

  {{ bias_recommendations | default("Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.", true)}}