Seongyun committed on
Commit
844f2b6
1 Parent(s): fd55759

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -30,17 +30,17 @@ pipeline_tag: text-generation
  Janus is a model trained using [Mistral-7B-v0.2](https://huggingface.co/mistral-community/Mistral-7B-v0.2) as its base model. Janus has been trained on [Multifaceted Collection](https://huggingface.co/datasets/kaist-ai/Multifaceted-Collection-SFT), a preference dataset containing 196k unique system messages for aligning LLMs to diverse human preferences. Janus not only excels at generating personalized responses that cater to various human preferences but is also adept at producing responses that are generally preferred for being helpful and harmless.

  # Model Details
- Janus-DPO-7B is a model created by applying DPO to Janus-66k-7B using the Multifaceted-Collection-DPO.
+ Janus-DPO-7B is a model created by applying DPO to Janus using the [Multifaceted-Collection-DPO](https://huggingface.co/datasets/kaist-ai/Multifaceted-Collection-DPO).

  ## Model Description

  - **Model type:** Language model
  - **Language(s) (NLP):** English
  - **License:** Apache 2.0
- - **Related Models:** [Janus-66k-7B]() [Janus-7B](), [Janus-ORPO-7B](), [Janus-RM-7B]()
+ - **Related Models:** [Janus-7B](https://huggingface.co/kaist-ai/janus-7b), [Janus-ORPO-7B](https://huggingface.co/kaist-ai/janus-orpo-7b), [Janus-RM-7B](https://huggingface.co/kaist-ai/janus-rm-7b)
  - **Training Datasets**: [Multifaceted-Collection-SFT](https://huggingface.co/datasets/kaist-ai/Multifaceted-Collection-SFT)
  - **Resources for more information:**
-   - [Research paper]()
+   - [Research paper](https://arxiv.org/abs/2405.17977)
    - [GitHub Repo](https://github.com/kaistAI/Janus)

  # Usage
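
The updated Model Details line states that Janus-DPO-7B was obtained by applying DPO to Janus on Multifaceted-Collection-DPO. As an illustration only, the sketch below shows how such a DPO run could be set up with the TRL library; the starting checkpoint `kaist-ai/janus-7b`, the `prompt`/`chosen`/`rejected` column mapping, and all hyperparameters are assumptions, not the authors' actual training recipe (see the GitHub repo for that).

```python
# Illustrative DPO sketch with TRL -- not the authors' training script.
# Assumptions: the dataset is (or is mapped to) TRL's prompt/chosen/rejected
# schema, and the hyperparameters below are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "kaist-ai/janus-7b"  # assumed SFT checkpoint to start DPO from
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Preference pairs; may need a .map() to the prompt/chosen/rejected columns.
dataset = load_dataset("kaist-ai/Multifaceted-Collection-DPO", split="train")

args = DPOConfig(
    output_dir="janus-dpo-7b",
    beta=0.1,                        # placeholder preference-strength coefficient
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=5e-7,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,                 # reference model is created internally if omitted
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```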
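
The hunk ends at the `# Usage` heading, whose body lies outside this diff. For context, here is a minimal sketch of loading the model and steering it with a preference-bearing system message via 🤗 Transformers; the repository id `kaist-ai/janus-dpo-7b` and the Mistral-style `[INST] ... [/INST]` prompt format are assumptions, so the card's actual Usage section takes precedence.

```python
# Minimal usage sketch (assumed repo id and prompt format; defer to the
# model card's Usage section for the canonical template).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "kaist-ai/janus-dpo-7b"  # assumed model repository id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Janus is steered by a system message describing the target reader's preferences.
system = ("You are a patient tutor. Prefer concise answers with one concrete "
          "example and no jargon.")
instruction = "Explain what Direct Preference Optimization (DPO) does."
prompt = f"[INST] {system}\n{instruction} [/INST]"  # assumed Mistral-style format

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
# Print only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```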