Any discussion in the original paper?

#3
by jAEhEEkIM

Hello! Thank you for sharing this valuable dataset.

I have a question about the data construction and analysis.

I read your original paper (Understanding Dataset Difficulty with V-Usable Information, ICML 2022), but I could not find any mention of this dataset.

Am I missing something, or was the dataset created after the paper?

Stanford NLP org

The dataset was created after the paper, but using a fairly straightforward application of the methods in the paper. Essentially:

  1. We first calculated the v-info of the unfiltered dataset (i.e., if comment A has a higher score than comment B, infer that A is preferred to B). We used both T5 and GPT-3-curie as the model family V.
  2. We found that the v-info of the unfiltered dataset was 0, meaning that the inferred preferences were unlearnable. By plotting the distribution of the PVI values, we saw that the high-PVI examples (i.e., the easiest to learn) were all pairs where the higher scoring comment was written after the lower scoring one. This made sense, since comments written earlier have more time to accrue votes, so there is a confound if the higher scoring comment was also written earlier.
  3. We filtered the data to only include pairs (A, B) where score(A) > score(B) and date(A) > date(B); a minimal sketch of this filter is shown below. We applied some other filters based on a similar idea, or to avoid toxic data.

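To make step 3 concrete, here is a minimal sketch of that filter. The field names (`score`, `created_utc`) are hypothetical placeholders, and the real pipeline differs in detail:

```python
from typing import TypedDict


class Comment(TypedDict):
    text: str
    score: int          # net upvotes at scrape time
    created_utc: float  # Unix timestamp of when the comment was posted


def keep_pair(a: Comment, b: Comment) -> bool:
    """Keep (A, B) as 'A preferred to B' only if A both scores higher AND was
    written later than B, so the higher score can't be explained by A simply
    having had more time to accrue votes."""
    return a["score"] > b["score"] and a["created_utc"] > b["created_utc"]


# Toy example with hypothetical comments:
candidate_pairs = [
    ({"text": "newer, higher-scoring", "score": 50, "created_utc": 2000.0},
     {"text": "older, lower-scoring", "score": 10, "created_utc": 1000.0}),
    ({"text": "older, higher-scoring", "score": 50, "created_utc": 1000.0},
     {"text": "newer, lower-scoring", "score": 10, "created_utc": 2000.0}),
]

filtered = [(a, b) for (a, b) in candidate_pairs if keep_pair(a, b)]
# Only the first pair survives; in the second, the score gap is confounded
# with posting time.
```

The other filters mentioned in step 3 (e.g., for toxicity) would just be additional predicates applied in the same way.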
The resulting dataset is SHP, and it has v-info comparable to that of datasets like Anthropic-HH (helpful) when the model family V is T5-xl.
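In case the terms are unfamiliar: roughly, in the paper's notation, the pointwise V-information (PVI) of an example is

$$\mathrm{PVI}(x \to y) = -\log_2 g'[\varnothing](y) + \log_2 g[x](y),$$

where $g[x]$ is a model from the family V finetuned on the actual (input, label) pairs and $g'[\varnothing]$ is one finetuned with the inputs replaced by a null input. The dataset-level v-info is just the average PVI, so a v-info of 0 (step 2 above) means conditioning on the input gives models in V no predictive advantage over simply modeling the label distribution.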

Thanks for your kind and detailed reply!

Can I find the results you mentioned in a specific paper?

I wonder whether fully human-generated data (e.g., SHP) could be as beneficial as model-generated data (e.g., Anthropic-HH).

Stanford NLP org

If you send me an email (kawin@stanford.edu), I can share a draft of a paper that generalizes the methodology we used to create SHP (we don't really discuss the details of how we created SHP 1.0 specifically, but you'll get a good sense of what was done).

> I wonder whether fully human-generated data (e.g., SHP) could be as beneficial as model-generated data (e.g., Anthropic-HH).

It's worth noting that in most cases the data is off-policy, so it doesn't matter whether you use DPO/KTO with machine-generated data or human-generated data, since neither is generated by the model you're aligning. It's also hard to compare datasets whose prompt distributions differ.

Thanks again for your detailed reply!

I would really like to read the draft as well.

I think SHP-style data (fully human-generated) could be another strategy for scaling human preference data (like Meta did recently with hand-crafted data).

I will reach out by email!
