alvarobartt (HF staff) committed

Commit 28aa8b7
1 parent: fa5dba5

Update README.md

Files changed (1):
  README.md: +7 -2
README.md CHANGED

@@ -18,6 +18,13 @@ license: apache-2.0
 
 # Model Card for Notus 7B
 
+<div align="center">
+<img src="https://cdn-uploads.huggingface.co/production/uploads/60f0608166e5701b80ed3f02/LU-vKiC0R7UxxITrwE1F_.png"/>
+<p style="text-align: center;">
+Image was artificially generated by Dalle-3 via ChatGPT Pro
+</p>
+</div>
+
 Notus is going to be a collection of fine-tuned models using DPO, similarly to Zephyr, but mainly focused
 on the Direct Preference Optimization (DPO) step, aiming to incorporate preference feedback into the LLMs
 when fine-tuning those. Notus models are intended to be used as assistants via chat-like applications, and
@@ -26,8 +33,6 @@ also using DPO.
 
 ## Model Details
 
-# notus-7b-dpo
-
 ### Model Description
 
 - **Developed by:** Argilla, Inc. (based on HuggingFace H4 and MistralAI previous efforts and amazing work)