Dataset Card for sd-character-level-ner

Dataset Summary

This dataset is based on the content of the SourceData (https://sourcedata.embo.org) database, which contains manually annotated figure legends written in English and extracted from scientific papers in the domain of cell and molecular biology (Liechti et al, Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471). Unlike the sd-nlp dataset, which is pre-tokenized with the roberta-base tokenizer, this dataset is not pre-tokenized: the text is provided as raw strings with character-level labels, so users can fine-tune models of their choice. Additional details are available at https://github.com/source-data/soda-roberta
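
Because the labels are character-level, fine-tuning a subword-based model typically requires aligning them to the tokenizer's offsets. The snippet below is a minimal sketch of one possible alignment; the bert-base-cased tokenizer and the strategy of labelling each token with the label of its first character are illustrative assumptions, not part of this dataset.

  from transformers import AutoTokenizer

  # Minimal alignment sketch (assumptions: a fast tokenizer, here bert-base-cased
  # chosen only for illustration, and one integer label per character).
  tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

  def align_char_labels(text, char_labels):
      encoding = tokenizer(text, return_offsets_mapping=True, truncation=True)
      token_labels = []
      for start, end in encoding["offset_mapping"]:
          if start == end:
              token_labels.append(-100)  # special tokens, ignored by the loss
          else:
              token_labels.append(char_labels[start])  # label of the token's first character
      return encoding, token_labels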

Supported Tasks and Leaderboards

Tags are provided as IOB2-style tags. PANELIZATION: figure captions (or figure legends) are usually composed of segments that each refer to one of several 'panels' of the full figure. Panels tend to represent results obtained with a coherent method and depict data points that can be meaningfully compared to each other. PANELIZATION provides the start (B-PANEL_START) of these segments and allows training for recognition of the boundary between consecutive panel legends. NER: biological and chemical entities are labeled. Specifically, the following entities are tagged (a short loading example follows the tag descriptions below):

  • SMALL_MOLECULE: small molecules
  • GENEPROD: gene products (genes and proteins)
  • SUBCELLULAR: subcellular components
  • CELL: cell types and cell lines
  • TISSUE: tissues and organs
  • ORGANISM: species
  • EXP_ASSAY: experimental assays

ROLES: the role of entities with regard to the causal hypotheses tested in the reported results. The tags are:

  • CONTROLLED_VAR: entities that are associated with experimental variables and that are subjected to controlled and targeted perturbations.
  • MEASURED_VAR: entities that are associated with the measured variables, i.e. the object of the measurements.

BORING: entities are marked with the tag BORING when they are mostly of descriptive value and not directly associated with causal hypotheses ('boring' is not an ideal choice of word, but it is short...). Typically, these are so-called 'reporter' gene products, entities used as a common baseline across samples, or entities that specify the context of the experiment (cellular system, species, etc.).
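
The snippet below is a minimal sketch of how the dataset might be loaded and inspected with the datasets library. It assumes the default configuration and the 'text'/'labels' fields shown in the Data Instances section below; depending on the loading script, a configuration name may need to be passed to load_dataset.

  from datasets import load_dataset

  # Minimal loading sketch (assumption: the default configuration exposes the
  # 'text' and 'labels' fields shown in the Data Instances section below).
  ds = load_dataset("EMBO/sd-character-level-ner")

  example = ds["train"][0]
  print(example["text"])
  print(example["labels"][:25])  # first few character-level label ids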

Languages

The text in the dataset is English.

Dataset Structure

Data Instances

{'text': '(E) Quantification of the number of cells without γ-Tubulin at centrosomes (γ-Tub -) in pachytene and diplotene spermatocytes in control, Plk1(∆/∆) and BI2536-treated spermatocytes. Data represent average of two biological replicates per condition. ',
 'labels': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            13, 14, 14, 14, 14, 14,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            3, 4, 4, 4, 4, 4, 4, 4, 4,
            0, 0, 0, 0,
            5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
            0, 0,
            3, 4, 4, 4, 4,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            3, 4, 4, 4,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            1, 2, 2, 2, 2, 2,
            0, 0, 0, 0, 0, 0, 0, 0, 0,
            7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
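
Reusing the ds object from the loading sketch above, the following sketch pairs each character of this example with its label id and prints the characters that carry a non-zero label. It assumes, as in the instance above, that 0 encodes the 'O' tag and that the label list is aligned one-to-one with the characters of the text.

  # Minimal sketch: show which characters carry a non-zero label id.
  # Assumptions: 0 encodes the "O" tag and len(text) == len(labels).
  example = ds["train"][0]
  for char, label_id in zip(example["text"], example["labels"]):
      if label_id != 0:
          print(char, label_id)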

Data Fields

  • text: the example text as a str
  • label_ids: a dictionary of character-level label lists (see the mapping sketch after this list):
    • entity_types: list of IOB2 tag strings for the entity type; possible values are ["O", "I-SMALL_MOLECULE", "B-SMALL_MOLECULE", "I-GENEPROD", "B-GENEPROD", "I-SUBCELLULAR", "B-SUBCELLULAR", "I-CELL", "B-CELL", "I-TISSUE", "B-TISSUE", "I-ORGANISM", "B-ORGANISM", "I-EXP_ASSAY", "B-EXP_ASSAY"]
    • panel_start: list of IOB2 tag strings in ["O", "B-PANEL_START"]
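
Whether the integer ids seen in the example above can be mapped back to these IOB2 strings depends on how the loading script declares the features. The sketch below attempts the mapping only if the 'labels' column is exposed as a Sequence of ClassLabel features, which is an assumption rather than something this card guarantees.

  # Hedged sketch: map integer label ids back to IOB2 tag strings, assuming the
  # 'labels' feature is a Sequence of ClassLabel (checked before use).
  label_feature = ds["train"].features["labels"]
  if hasattr(label_feature, "feature") and hasattr(label_feature.feature, "int2str"):
      example = ds["train"][0]
      tags = [label_feature.feature.int2str(i) for i in example["labels"]]
      print(tags[:30])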

Data Splits

  DatasetDict({
      train: Dataset({
          features: ['text', 'labels'],
          num_rows: 66085
      })
      test: Dataset({
          features: ['text', 'labels'],
          num_rows: 8225
      })
      validation: Dataset({
          features: ['text', 'labels'],
          num_rows: 7948
      })
  })

Dataset Creation

Curation Rationale

The dataset was built to train models for the automatic extraction of a knowledge graph from the scientific literature. It can be used to train character-based models for text segmentation and named entity recognition.

Source Data

Initial Data Collection and Normalization

Figure legends were annotated according to the SourceData framework described in Liechti et al 2017 (Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471). The curation tool at https://curation.sourcedata.io was used to segment figure legends into panel legends, tag entities, assign experimental roles, and normalize entities with standard identifiers (not available in this dataset). The source data were downloaded from the SourceData API (https://api.sourcedata.io) on 21 Jan 2021.

Who are the source language producers?

The examples are extracted from figure legends of scientific papers in cell and molecular biology.

Annotations

Annotation process

The annotations were produced manually by expert curators from the SourceData project (https://sourcedata.embo.org).

Who are the annotators?

Curators of the SourceData project.

Personal and Sensitive Information

None known.

Considerations for Using the Data

Social Impact of Dataset

Not applicable.

Discussion of Biases

The examples are heavily biased towards cell and molecular biology and are enriched in papers published in EMBO Press journals (https://embopress.org).

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

Thomas Lemberger, EMBO.

Licensing Information

CC BY 4.0

Citation Information

[More Information Needed]

Contributions

Thanks to @tlemberger and @drAbreu for adding this dataset.
