davanstrien committed
Commit b40cf90
Parent(s): d92bc63

Update app.py overview with ORPO paper link and revised description

Files changed (1): app.py (+5 -5)
app.py CHANGED
@@ -113,13 +113,13 @@ collections = create_update_collections(datasets)
 languages = list(datasets.keys())

 overview = """
-This Space shows an overview of Direct Preference Optimization (DPO) datasets available on the Hugging Face Hub across different languages.
+This Space shows an overview of preference datasets, in particular DPO-style datasets, available on the Hugging Face Hub across different languages.
+Recently, [Odds Ratio Preference Optimization](https://huggingface.co/papers/2403.07691) (ORPO) has been demonstrated to be a powerful tool for training better-performing language models directly from preference datasets.

-Recently [Odds Ratio Preference Optimization](https://huggingface.co/papers/2403.07691) ORPO has been demonstrated to be a powerful tool for training better performing language models.
-
-- ORPO training can be done using DPO style datasets
+- ORPO training can be done using DPO-style datasets
 - Is having enough DPO datasets for different languages a key ingredient for training better models for every language?
-- This Space aims to track the number DPO datasets are available for different languages and how many datasets are available for each language!"""
+- This Space aims to track the number of DPO datasets available on the Hugging Face Hub for different languages.
+"""

 dpo = """
 #### What is Direct Preference Optimization (DPO)?
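
For context on the "ORPO training can be done using DPO-style datasets" point in the revised overview: below is a minimal sketch of ORPO training on such a dataset using TRL's `ORPOTrainer`. The model checkpoint, dataset name, and hyperparameters are illustrative assumptions, not taken from this Space.

```python
# Minimal sketch (assumptions, not part of this Space): ORPO training on a
# DPO-style preference dataset with TRL. The model, dataset, and
# hyperparameters below are illustrative placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "facebook/opt-350m"  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# DPO-style datasets expose "prompt", "chosen", and "rejected" columns;
# ORPO consumes this format directly and needs no separate reference model.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")  # placeholder dataset

config = ORPOConfig(output_dir="orpo-example", beta=0.1)  # beta weights the odds-ratio loss term
trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # recent TRL versions take processing_class= instead
)
trainer.train()
```

Any of the language-specific DPO datasets this Space tracks could be substituted, provided it exposes the same prompt/chosen/rejected columns.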