How to use it for a summarization task

#1
by kashif09 - opened

I am using the BART model and want to fine-tune it on the CRD3 dataset, but I am having trouble figuring out how to use the dataset. What should be passed as the input dialogues? The format of the turns key is a little odd, with a large number of turns and each turn being quite large. How should I preprocess the turns before passing them to the BART model?

Datasets Maintainers org

Hi! You can probably preprocess the dataset using map to split out each turn:

def split_turns(batch):
    return {
        "utterance": [utterance for turns in batch["turns"] for turn in turns for utterance in turn["utterances"]]
    }

ds = ds.map(split_turns, batched=True, remove_columns=ds.column_names)
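
Here ds is assumed to be a single split of the dataset loaded beforehand, e.g. something like:

from datasets import load_dataset

ds = load_dataset("crd3", split="train")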

So should I join the elements of this list of utterances to get a single input dialogue text for the whole chunk, or pass each element, with the same chunk repeating, as a separate data point?

Datasets Maintainers org

It's up to you: it depends on whether you want your model to handle whole dialogues or just individual turns or utterances. To get the dialogues, you can join the turns and utterances together using whatever separators you want.

I think there is also a "names" field in each turn that you can use if you want to include the speakers' names in your input dialogues.
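
For example, a minimal sketch (assuming each turn carries parallel "names" and "utterances" lists) that builds one dialogue string per example, with the speaker names prepended to each turn:

def build_dialogues(batch, turn_sep="\n", utt_sep=" "):
    # For each example, format every turn as "SPEAKERS: utterances" and join the turns
    dialogues = []
    for turns in batch["turns"]:
        lines = []
        for turn in turns:
            speakers = " and ".join(turn["names"])  # one or more speakers per turn
            text = utt_sep.join(turn["utterances"])  # join the utterances of the turn
            lines.append(f"{speakers}: {text}")
        dialogues.append(turn_sep.join(lines))
    return {"dialogue": dialogues}

ds = ds.map(build_dialogues, batched=True, remove_columns=ds.column_names)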

Thanks. One more question: when I load the dataset, why do the train, validation, and test splits each contain 52796 examples, when the splits in the paper are different?
Also, how should I use chunk_id when processing the data the way you described, joining the utterances in the turns with a separator?

Datasets Maintainers org

Oh it looks like there's an error in the dataset: each split seems to contain the same examples.
I opened a PR here to fix this: https://github.com/huggingface/datasets/pull/4705

Until the PR is merged, you can already use the fix by passing revision="fix-crd3" to load_dataset(...)
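
For example:

from datasets import load_dataset

ds = load_dataset("crd3", revision="fix-crd3")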

Datasets Maintainers org

To split the turns but join the utterances of the same turn together:

def split_turns(batch, sep=" "):
    return {"turn": [sep.join(turn["utterances"]) for turns in batch["turns"] for turn in turns]}

ds = ds.map(split_turns, batched=True, remove_columns=ds.column_names)
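
If you want input/target pairs for summarization, a possible sketch (assuming each example also carries the aligned "chunk" summary text) is to keep the chunk as the target and join the turns of each example into the input dialogue:

def make_examples(batch, turn_sep="\n", utt_sep=" "):
    # Join the utterances of each turn, then join the turns of each example into one dialogue
    inputs = [
        turn_sep.join(utt_sep.join(turn["utterances"]) for turn in turns)
        for turns in batch["turns"]
    ]
    # The aligned "chunk" text is kept as the summarization target
    return {"input_dialogue": inputs, "summary": batch["chunk"]}

ds = ds.map(make_examples, batched=True, remove_columns=ds.column_names)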
Datasets Maintainers org

The PR is merged, so you can now pass revision="main" to load_dataset(...)
