Dataset Card for "shp_filtered_MMPO"
This is the filtered version of the SHP dataset used to train MMPO, as introduced in the following paper:
Margin Matching Preference Optimization: Enhanced Model Alignment with Granular Feedback
Kyuyoung Kim*, Ah Jeong Seo*, Hao Liu, Jinwoo Shin, Kimin Lee
In EMNLP 2024 Findings
Dataset Description
The original SHP dataset consists of 385k collective human preferences over responses to questions/instructions in 18 different subject areas, from cooking to legal advice. To create the filtered SHP dataset used to train MMPO, we extracted a subset of 55k examples, following Ethayarajh et al. (2022) and Sun et al. (2023).
However, unlike prior work that trained only on preferences with large score differences, we sampled across the full range of score differences to evaluate the methods over a wide range of quality margins. Analyzing the distribution of score differences in the SHP dataset, we found that roughly 50% of the data has relatively small differences. Therefore, to test whether the model can be optimized effectively on datasets containing many low-confidence preferences, we sampled in proportion to the score-difference distribution of the original SHP.
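The snippet below is a minimal sketch of this kind of proportional (stratified) sampling over score-difference bins, using column names from the public SHP schema (`score_A`, `score_B`). The bin edges, random seed, and target size are illustrative assumptions, not the exact procedure used for this dataset; see the official code for the actual filtering.

```python
import numpy as np
from datasets import load_dataset

# Illustrative settings (assumptions, not the paper's exact configuration).
TARGET_SIZE = 55_000
BIN_EDGES = [1, 2, 5, 10, 50, 100, 10**9]  # assumed score-difference bins

# Load the original SHP training split and compute per-example score differences.
shp = load_dataset("stanfordnlp/SHP", split="train")
score_diff = np.abs(np.array(shp["score_A"]) - np.array(shp["score_B"]))
bins = np.digitize(score_diff, BIN_EDGES)

rng = np.random.default_rng(0)
keep = []
for b in np.unique(bins):
    idx = np.where(bins == b)[0]
    # Sample each bin in proportion to its share of the full dataset,
    # so the subset preserves the original score-difference distribution.
    n = int(round(TARGET_SIZE * len(idx) / len(score_diff)))
    keep.extend(rng.choice(idx, size=min(n, len(idx)), replace=False))

filtered = shp.select(sorted(int(i) for i in keep))
print(len(filtered))
```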
More details can be found in the paper referenced above. Additional details on how the SHP dataset was filtered are available in the official code.
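If this dataset is hosted on the Hugging Face Hub, it can be loaded with the `datasets` library as sketched below; the repository namespace is a placeholder and should be replaced with the actual one.

```python
from datasets import load_dataset

# "<namespace>" is a placeholder -- substitute the Hub namespace that hosts this card.
dataset = load_dataset("<namespace>/shp_filtered_MMPO", split="train")
print(dataset)
```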