eLife dataset statistics

#1 by Ewanwong - opened

Hi there. I find your datasets quite interesting and I'm currently working with them. However, when I calculate the statistics of the datasets, the results I get differ from what you present in the paper. For example, in the eLife dataset the average article/summary lengths I get are 10133.07/382.69 words, and the average number of sentences per summary is 17.97, which diverge considerably from the values in your paper. I split on blank space to count words and use NLTK to tokenize sentences (a minimal sketch of my counting approach is below). Do you have any idea why this happens?
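For reference, this is roughly how I computed the numbers above. It is only a sketch: `article` is a placeholder for one article (or summary) string from the dataset, and I assume NLTK's standard `punkt` sentence tokenizer.

```python
import nltk
from nltk.tokenize import sent_tokenize

nltk.download('punkt', quiet=True)  # punkt model is required by sent_tokenize

article = "..."  # placeholder: one article (or summary) string from the eLife dataset

num_words = len(article.split())             # split on blank space, count every token
num_sentences = len(sent_tokenize(article))  # NLTK rule-based sentence tokenizer
```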

Hi Yifan - thank you for your interest in our work. For the statistics in our paper, we only consider tokens containing letters when we calculate the lengths of our articles/summaries (so as to approximate the number of actual words). That is, we split on blank space and then filter the resulting tokens with `re.search('[a-zA-Z]', token)` to retain only those likely to be words. We tokenize sentences using the PySBD rule-based parser, as mentioned in the paper. Hope this is helpful!
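A minimal sketch of the filtering described above, assuming the counts are computed per article/summary string (the function names here are just illustrative, not from our released code):

```python
import re
import pysbd

segmenter = pysbd.Segmenter(language="en", clean=False)  # PySBD rule-based parser

def count_words(text):
    # Split on blank space, then keep only tokens containing at least one letter,
    # so punctuation-only and purely numeric tokens are not counted as words.
    return sum(1 for token in text.split() if re.search('[a-zA-Z]', token))

def count_sentences(text):
    # Sentence segmentation with PySBD
    return len(segmenter.segment(text))
```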

Now I see. Very helpful, thank you so much for the explanation!

tomasg25 changed discussion status to closed
