Update README.md
README.md CHANGED

@@ -5,9 +5,10 @@ datasets:
 - teknium/OpenHermes-2.5
 - LDJnr/Capybara
 - Intel/orca_dpo_pairs
-- argilla/distilabel-
+- argilla/distilabel-capybara-dpo-7k-binarized
 language:
 - en
+pipeline_tag: text-generation
 ---

 # Quyen
@@ -27,7 +28,7 @@ All models were trained with SFT and DPO using the following dataset:

 - *OpenHermes-2.5* by **Teknium**
 - *Capybara* by **LDJ**
-- *distilabel-
+- *argilla/distilabel-capybara-dpo-7k-binarized* by **argilla**
 - *orca_dpo_pairs* by **Intel**
 - and Private Data by **Ontocord** & **BEE-spoke-data**

@@ -58,4 +59,5 @@ model.generate(**gen_input)
 - Coming Soon! We will update the benchmarks later

 # Acknowledgement
-- We're incredibly grateful to **Tensoic** and **Ontocord** for their generous support with compute and data preparation.
+- We're incredibly grateful to **Tensoic** and **Ontocord** for their generous support with compute and data preparation.
+- Special thanks to the Qwen team for letting us access the models early for these amazing finetunes.
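
The last hunk's context line, `model.generate(**gen_input)`, comes from the README's usage snippet, which this diff does not show. For orientation, below is a minimal sketch of that pattern with Hugging Face Transformers; the checkpoint name, chat messages, and generation settings are illustrative placeholders rather than part of the commit.

```python
# Hypothetical sketch of the usage implied by `model.generate(**gen_input)`;
# the model ID is a placeholder assumption, not stated in this diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vilm/Quyen-Plus-v0.1"  # placeholder checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, who are you?"},
]

# Render the chat template to a prompt string, then tokenize it into a dict
# of tensors so it can be unpacked straight into generate().
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
gen_input = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**gen_input, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because `gen_input` here is a tokenizer output (a dict of tensors), it can be unpacked directly into `generate`, which is what the `**gen_input` call in the README's snippet suggests.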
|