---
dataset_info:
  features:
    - name: CDS_position_ids
      sequence: int32
    - name: IGS_position_ids
      sequence: int32
    - name: CDS_ids
      sequence: string
    - name: IGS_ids
      sequence: string
    - name: CDS_seqs
      sequence: large_string
    - name: IGS_seqs
      sequence: large_string
    - name: CDS_orientations
      sequence: bool
  splits:
    - name: train
      num_bytes: 1916402470934
      num_examples: 270640482
  download_size: 1253813127320
  dataset_size: 1916402470934
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Dataset Card for OMG: An Open MetaGenomic Dataset

The OMG dataset is a 3.1T base pair metagenomic pretraining dataset combining EMBL's MGnify and JGI's IMG databases. The combined data is preprocessed into a mixed-modality dataset, with protein-coding sequences represented as translated amino acids and intergenic sequences as nucleic acids.
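
The feature schema in the metadata above can be inspected before committing to a download; a minimal sketch using the Hugging Face `datasets` builder API:

```python
import datasets

# Fetch only the dataset metadata (no data download) to inspect the
# mixed-modality feature schema and overall size.
builder = datasets.load_dataset_builder('tattabio/OMG')
print(builder.info.features)      # CDS_* (amino acid) and IGS_* (nucleotide) fields
print(builder.info.dataset_size)  # total size in bytes (~1.9 TB)
```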

We make two additional datasets available on the HuggingFace Hub:

- OG: A subset of OMG consisting of high-quality genomes with taxonomic information.
- OMG_prot50: A protein-only dataset generated by clustering OMG at 50% sequence identity, resulting in 207M protein sequences.
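
Both follow the same loading pattern as OMG; a minimal sketch, assuming the Hub IDs `tattabio/OG` and `tattabio/OMG_prot50`:

```python
import datasets

# Stream the auxiliary datasets for a quick look; the Hub IDs are assumed to
# live under the same tattabio namespace as OMG.
og = datasets.load_dataset('tattabio/OG', streaming=True)['train']
prot50 = datasets.load_dataset('tattabio/OMG_prot50', streaming=True)['train']
print(next(iter(og)))
print(next(iter(prot50)))
```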

See https://github.com/TattaBio/OMG for details and an example tokenization script.

## Use

```python
import datasets

# Downloads the full dataset (~1.25 TB download, ~1.9 TB on disk).
ds = datasets.load_dataset('tattabio/OMG')
```

To preview the dataset without downloading, load in streaming mode:

```python
import datasets

# Streaming iterates over examples without downloading the full dataset.
ds = datasets.load_dataset('tattabio/OMG', streaming=True)['train']
print(next(iter(ds)))
```
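
Each example keeps its CDS and IGS elements in separate parallel lists. A minimal sketch of merging them back into contig order, assuming the `*_position_ids` fields give each element's index along the contig and `CDS_orientations` encodes strand (our reading of the schema, not stated in this card):

```python
import datasets

ds = datasets.load_dataset('tattabio/OMG', streaming=True)['train']
ex = next(iter(ds))

# Merge CDS (amino acid) and IGS (nucleotide) elements into a single list,
# assuming *_position_ids index each element's place along the contig.
elements = [
    (pos, 'CDS', seq, strand)
    for pos, seq, strand in zip(ex['CDS_position_ids'], ex['CDS_seqs'], ex['CDS_orientations'])
] + [
    (pos, 'IGS', seq, None)
    for pos, seq in zip(ex['IGS_position_ids'], ex['IGS_seqs'])
]

# Print the first few elements in genomic order.
for pos, kind, seq, strand in sorted(elements, key=lambda e: e[0])[:5]:
    print(pos, kind, seq[:20], strand)
```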

## Citation

BibTeX:

TODO