---
annotations_creators:
  - expert-generated
language_creators:
  - expert-generated
language:
  - en
license:
  - apache-2.0
multilinguality:
  - monolingual
size_categories:
  - 10K<n<100K
source_datasets:
  - extended|other-MS^2
  - extended|other-Cochrane
task_categories:
  - summarization
  - text2text-generation
task_ids: []
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
tags:
  - query-based-summarization
  - query-based-multi-document-summarization
  - scientific-document-summarization
---

This is a copy of the Cochrane dataset, except that the input source documents of its validation split have been replaced by documents retrieved with a sparse retriever. The retrieval pipeline (sketched in code after the list) used:

- **query**: the `target` field of each example
- **corpus**: the union of all documents in the train, validation, and test splits; a document is the concatenation of the title and abstract
- **retriever**: BM25 via PyTerrier with default settings
- **top-k strategy**: "oracle", i.e. the number of documents retrieved, k, is set to the original number of input documents for each example
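
A minimal sketch of how this pipeline could be reproduced with PyTerrier is shown below. The dataset name and config (`allenai/mslr2022`, `cochrane`) and the `pmid` document-id field are assumptions made for illustration; only the `target`, `title`, and `abstract` fields, BM25 with default settings, and the oracle top-k rule come from the description above.

```python
import re

import pyterrier as pt
from datasets import load_dataset

if not pt.started():
    pt.init()

# Dataset name/config and the "pmid" field are assumptions for illustration.
dataset = load_dataset("allenai/mslr2022", "cochrane")


def iter_docs():
    """Yield every input document from all splits, deduplicated by id.

    A document is the concatenation of its title and abstract.
    """
    seen = set()
    for split in ("train", "validation", "test"):
        for example in dataset[split]:
            for doc_id, title, abstract in zip(
                example["pmid"], example["title"], example["abstract"]
            ):
                if doc_id not in seen:
                    seen.add(doc_id)
                    yield {"docno": str(doc_id), "text": f"{title} {abstract}"}


# Index the corpus and build a BM25 retriever with default settings.
index_ref = pt.IterDictIndexer("./cochrane-bm25-index").index(iter_docs())
bm25 = pt.BatchRetrieve(index_ref, wmodel="BM25")

# "Oracle" top-k: retrieve as many documents as the example originally had.
for example in dataset["validation"]:
    k = len(example["abstract"])
    # Terrier's query parser rejects most punctuation, so strip it first.
    query = re.sub(r"[^\w\s]", " ", example["target"])
    retrieved_docnos = bm25.search(query).head(k)["docno"].tolist()
    # ... the example's input documents would be replaced by `retrieved_docnos`
```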

Retrieval results on the validation set:

| nDCG   | Recall@100 | Recall@1000 | R-Prec |
|--------|------------|-------------|--------|
| 0.6241 | 0.7226     | 0.8855      | 0.4025 |
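
As a rough, hypothetical sketch, metrics like these could be recomputed with the `ir_measures` package, assuming relevance judgments (qrels) in which the relevant documents for each validation example are its original input documents; the identifiers below are made up for illustration.

```python
import ir_measures
from ir_measures import nDCG, R, Rprec

# qrels: {query_id: {doc_id: relevance}}; the relevant documents for a query
# (a validation example) are assumed to be its original input documents.
qrels = {"example-1": {"pmid-1": 1, "pmid-2": 1}}

# run: {query_id: {doc_id: retrieval_score}}, e.g. produced by the BM25 retriever.
run = {"example-1": {"pmid-1": 12.3, "pmid-9": 11.7, "pmid-2": 10.4}}

# Aggregate nDCG, Recall@100, Recall@1000 and R-precision over all queries.
print(ir_measures.calc_aggregate([nDCG, R @ 100, R @ 1000, Rprec], qrels, run))
```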