---
size_categories:
  - 1K<n<10K
data_files:
  - split: train
    path: train.tsv
  - split: dev
    path: dev.tsv
  - split: test
    path: test.tsv
language:
  - en
pretty_name: ContextDeP
---

This is ContextDeP (Context-Dependent Paraphrases in news interviews), created as described in https://arxiv.org/abs/2404.06670.

This dataset consists of 601 interview turns between a guest and a host of a news interview and 5581 crowd-sourced annotations of whether the host paraphrased the guest. If an annotator classified a turn as a paraphrase, they also provided the spans of words where the paraphrase occurs. There are between 3 and 21 annotations per item; generally, an item has more annotations if annotators disagreed more.
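Since the number of annotations varies per item, a simple way to get one label per item is a majority vote over `Is Paraphrase`. A minimal sketch with toy rows (the vote values and the second QID below are made up for illustration, not taken from the dataset):

```python
import pandas as pd

# Toy rows in the dataset's schema (values illustrative only)
rows = [
    {"QID": "CNN-177596-7", "Is Paraphrase": 0},
    {"QID": "CNN-177596-7", "Is Paraphrase": 1},
    {"QID": "CNN-177596-7", "Is Paraphrase": 1},
    {"QID": "NPR-0000-0",   "Is Paraphrase": 0},
    {"QID": "NPR-0000-0",   "Is Paraphrase": 0},
]
df = pd.DataFrame(rows)

# An item counts as a paraphrase if more than half of its annotators said so
majority = df.groupby("QID")["Is Paraphrase"].mean() > 0.5
print(bool(majority["CNN-177596-7"]))  # True
print(bool(majority["NPR-0000-0"]))    # False
```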

Data is tab-separated and looks like:

```
QID	Annotator	Session	Is Paraphrase	Guest Tokens	Guest Highlights	Host Tokens	Host Highlights
CNN-177596-7	PROLIFIC_1	R_2PoEZfAptrkFdsx	0	This is not good.	[0, 0, 0, 0]	This is what you don't want happening with your menorah, folks.	[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
CNN-177596-7	PROLIFIC_2	R_3HCjJuW3mB9PQpL	1	This is not good.	[1, 1, 1, 1]	This is what you don't want happening with your menorah, folks.	[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
```
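Each highlight list holds one 0/1 flag per whitespace-separated token of the corresponding utterance, so a row can be parsed and checked directly. A minimal sketch (the row below is the second sample row above):

```python
import ast

# One annotation row from the sample above (fields are tab-separated)
line = ("CNN-177596-7\tPROLIFIC_2\tR_3HCjJuW3mB9PQpL\t1\t"
        "This is not good.\t[1, 1, 1, 1]\t"
        "This is what you don't want happening with your menorah, folks.\t"
        "[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]")

qid, annotator, session, is_paraphrase, g_toks, g_hl, h_toks, h_hl = line.split("\t")
g_hl, h_hl = ast.literal_eval(g_hl), ast.literal_eval(h_hl)

# Each highlight vector aligns with the whitespace tokens of its utterance
assert len(g_hl) == len(g_toks.split())  # 4 guest tokens
assert len(h_hl) == len(h_toks.split())  # 11 host tokens
```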

The dataset is shared under a research-only license. See GitHub and arXiv for more details.

You might want to get the votes for one question ID:

```python
from datasets import load_dataset
import pandas as pd
import ast

# Load the dataset
dataset = load_dataset("AnnaWegmann/Paraphrases-in-Interviews")

# Combine all splits into one pandas DataFrame
split_names = list(dataset.keys())
dataframes = [dataset[split].to_pandas() for split in split_names]
df = pd.concat(dataframes, ignore_index=True)  # if you just need one split: dataset['train'].to_pandas()

# Specify the QID to process (e.g., 'CNN-177596-7')
qid = 'CNN-177596-7'

# Filter the DataFrame for the specified QID
group = df[df['QID'] == qid]

# Compute total votes and paraphrase votes
total_votes = len(group)
paraphrase_votes = group['Is Paraphrase'].astype(int).sum()

# Sum the guest highlight votes per token across annotators
guest_highlights_list = group['Guest Highlights'].apply(ast.literal_eval).tolist()
guest_highlights_sums = [sum(x) for x in zip(*guest_highlights_list)]

# Sum the host highlight votes per token across annotators
host_highlights_list = group['Host Highlights'].apply(ast.literal_eval).tolist()
host_highlights_sums = [sum(x) for x in zip(*host_highlights_list)]

# Output the results
print(f"QID: {qid}")
print(f"Total Votes: {total_votes}")
print(f"Paraphrase Votes: {paraphrase_votes}\n")

print("Guest Tokens:")
print(group['Guest Tokens'].iloc[0])
print("Guest Highlights Counts:")
print(guest_highlights_sums, "\n")

print("Host Tokens:")
print(group['Host Tokens'].iloc[0])
print("Host Highlights Counts:")
print(host_highlights_sums)
```

should return:

```
QID: CNN-177596-7
Total Votes: 20
Paraphrase Votes: 10

Guest Tokens:
This is not good.
Guest Highlights Counts:
[10, 9, 9, 9]

Host Tokens:
This is what you don't want happening with your menorah, folks.
Host Highlights Counts:
[9, 8, 9, 9, 9, 9, 9, 7, 7, 7, 1]
```
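The per-token counts can be turned into an aggregated highlight span. One plausible aggregation (an assumption for illustration, not the paper's official method) normalizes by the number of paraphrase votes, since only annotators who labeled the pair a paraphrase highlight tokens, and then keeps tokens that a majority of those annotators marked:

```python
# Values taken from the example output above
paraphrase_votes = 10
host_tokens = "This is what you don't want happening with your menorah, folks.".split()
host_counts = [9, 8, 9, 9, 9, 9, 9, 7, 7, 7, 1]

# Soft span: fraction of paraphrase-voting annotators highlighting each token
props = [c / paraphrase_votes for c in host_counts]

# Hard span: keep tokens that a majority of paraphrase voters highlighted
span = [tok for tok, p in zip(host_tokens, props) if p > 0.5]
print(" ".join(span))  # This is what you don't want happening with your menorah,
```

Here every host token except the trailing "folks." survives the 0.5 threshold.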

For comments or questions, reach out to Anna (a.m.wegmann @ uu.nl) or raise an issue on GitHub.

If you find this dataset helpful, consider citing our paper:

```bibtex
@inproceedings{wegmann-etal-2024-whats,
    title = "What{'}s Mine becomes Yours: Defining, Annotating and Detecting Context-Dependent Paraphrases in News Interview Dialogs",
    author = "Wegmann, Anna  and
      Broek, Tijs A. Van Den  and
      Nguyen, Dong",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.52",
    pages = "882--912",
    abstract = "Best practices for high conflict conversations like counseling or customer support almost always include recommendations to paraphrase the previous speaker. Although paraphrase classification has received widespread attention in NLP, paraphrases are usually considered independent from context, and common models and datasets are not applicable to dialog settings. In this work, we investigate paraphrases across turns in dialog (e.g., Speaker 1: {``}That book is mine.{''} becomes Speaker 2: {``}That book is yours.{''}). We provide an operationalization of context-dependent paraphrases, and develop a training for crowd-workers to classify paraphrases in dialog. We introduce ContextDeP, a dataset with utterance pairs from NPR and CNN news interviews annotated for context-dependent paraphrases. To enable analyses on label variation, the dataset contains 5,581 annotations on 600 utterance pairs. We present promising results with in-context learning and with token classification models for automatic paraphrase detection in dialog.",
}
```