Datasets:
sent_comp

Task Categories: other
Languages: English
Multilinguality: monolingual
Size Categories: 100K<n<1M
Language Creators: found
Annotations Creators: machine-generated
Source Datasets: original
Licenses: unknown

Dataset Card for Google Sentence Compression

Dataset Summary

A major challenge in supervised sentence compression is making use of rich feature representations because of very scarce parallel data. We address this problem and present a method to automatically build a compression corpus with hundreds of thousands of instances on which deletion-based algorithms can be trained. In our corpus, the syntactic trees of the compressions are subtrees of their uncompressed counterparts, and hence supervised systems which require a structural alignment between the input and output can be successfully trained. We also extend an existing unsupervised compression method with a learning module. The new system uses structured prediction to learn from lexical, syntactic and other features. An evaluation with human raters shows that the presented data harvesting method indeed produces a parallel corpus of high quality. Also, the supervised system trained on this corpus gets high scores both from human raters and in an automatic evaluation setting, significantly outperforming a strong baseline.

Supported Tasks and Leaderboards

[More Information Needed]

Languages

English

Dataset Structure

Data Instances

Each data instance contains the original sentence in instance["graph"]["sentence"] and the compressed sentence in instance["compression"]["text"]. Because this dataset was created by pruning dependency connections, the authors also include the dependency tree and the transformed graph of both the original and the compressed sentence.
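As a sketch of how these fields are accessed, the snippet below builds a hypothetical instance following the schema described here (the sentence text is invented for illustration, not taken from the dataset):

```python
# Hypothetical instance mirroring the card's schema; the sentences
# are invented examples, not real rows from the dataset.
instance = {
    "graph": {"sentence": "The quick brown fox jumped over the lazy dog yesterday."},
    "compression": {"text": "The fox jumped over the dog.", "edge": []},
}

# The original sentence and its compression live at these two paths.
original = instance["graph"]["sentence"]
compressed = instance["compression"]["text"]

print(original)
print(compressed)
```

Real instances loaded with the `datasets` library expose the same nested structure, with the additional fields documented below.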

Data Fields

Each instance contains the following fields:

  • graph (Dict): the transformation graph/tree used to extract the compression (a modified version of a dependency tree).
    • This has the same features as a dependency tree (listed below).
  • compression (Dict)
    • text (str)
    • edge (List)
  • headline (str): the headline of the original news page.
  • compression_ratio (float): the length ratio of the compressed sentence to the original sentence.
  • doc_id (str): url of the original news page.
  • source_tree (Dict): the original dependency tree (features listed below).
  • compression_untransformed (Dict)
    • text (str)
    • edge (List)
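For illustration, a compression ratio can be recomputed from the two texts. The card does not state whether the dataset's compression_ratio is measured in characters or tokens, so the character-based definition below is an assumption:

```python
# Invented example pair; one plausible definition of the ratio is
# compressed length divided by original length, in characters.
# The dataset may instead count tokens.
original = "The quick brown fox jumped over the lazy dog yesterday."
compressed = "The fox jumped over the dog."

ratio = len(compressed) / len(original)
print(f"compression ratio = {ratio:.2f}")
```

A ratio near 0.5 means the compression keeps roughly half of the original sentence.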

Dependency tree features:

  • id (str)
  • sentence (str)
  • node (List): list of nodes; each node represents a word or word phrase in the tree.
    • form (string)
    • type (string): the entity type of the node. Defaults to "" if it is not an entity.
    • mid (string)
    • word (List): list of words the node contains.
      • id (int)
      • form (str): the word from the sentence.
      • stem (str): the stemmed/lemmatized version of the word.
      • tag (str): dependency tag of the word.
    • gender (int)
    • head_word_index (int)
  • edge (List): list of the dependency connections between words.
    • parent_id (int)
    • child_id (int)
    • label (str)
  • entity_mention (List): list of the entity mentions in the sentence.
    • start (int)
    • end (int)
    • head (str)
    • name (str)
    • type (str)
    • mid (str)
    • is_proper_name_entity (bool)
    • gender (int)

Data Splits

[More Information Needed]

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

[More Information Needed]

Contributions

Thanks to @mattbui for adding this dataset.