
qnguyen3 committed
Commit 8d2fdf1
1 Parent(s): 35cb3d3

Update README.md

Files changed (1)
  1. README.md +4 -3
README.md CHANGED
@@ -5,9 +5,10 @@ datasets:
   - teknium/OpenHermes-2.5
   - LDJnr/Capybara
   - Intel/orca_dpo_pairs
- - argilla/distilabel-intel-orca-dpo-pairs
+ - argilla/distilabel-capybara-dpo-7k-binarized
  language:
  - en
+ pipeline_tag: text-generation
  ---
 
  # Quyen
@@ -27,7 +28,7 @@ All models were trained with SFT and DPO using the following dataset:
 
  - *OpenHermes-2.5* by **Teknium**
  - *Capyabara* by **LDJ**
- - *distilabel-intel-orca-dpo-pairs* by **argilla**
+ - *argilla/distilabel-capybara-dpo-7k-binarized* by **argilla**
  - *orca_dpo_pairs* by **Intel**
  - and Private Data by **Ontocord** & **BEE-spoke-data**
 
@@ -59,4 +60,4 @@ model.generate(**gen_input)
 
  # Acknowledgement
  - We're incredibly grateful to **Tensoic** and **Ontocord** for their generous support with compute and data preparation.
- - We want to say a special thank you to the **Qwen** team for the amazing base model and allowing us to get access to the models early.
+ - Special thanks to the Qwen team for letting us access the models early for these amazing finetunes.
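
The new `pipeline_tag: text-generation` metadata and the `model.generate(**gen_input)` line visible in the last hunk header correspond to ordinary Transformers text-generation usage. Below is a minimal sketch of how a Quyen checkpoint might be loaded and prompted; the repository id `vilm/Quyen-Plus-v0.1` and the generation settings are assumptions, since this commit does not show the README's full usage section.

```python
# Minimal usage sketch (assumed, not part of this commit): load a Quyen
# checkpoint with Transformers and generate a chat reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vilm/Quyen-Plus-v0.1"  # placeholder id; the diff does not name the exact checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Qwen2-based chat models ship a chat template; build the prompt from messages.
messages = [{"role": "user", "content": "Hello! What can you do?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Mirrors the README's `model.generate(**gen_input)` call.
gen_input = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**gen_input, max_new_tokens=256)
print(tokenizer.decode(output[0][gen_input["input_ids"].shape[1]:], skip_special_tokens=True))
```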