---
base_model: alpindale/Mistral-7B-v0.2-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: janus-7b
results: []
license: apache-2.0
language:
- en
datasets:
- kaist-ai/Multifaceted-Collection-SFT
library_name: transformers
pipeline_tag: text-generation
---
## Links for Reference
- **Homepage: https://lklab.kaist.ac.kr/Janus/**
- **Repository: https://github.com/kaistAI/Janus**
- **Paper: https://arxiv.org/abs/2405.17977**
- **Point of Contact: seongyun@kaist.ac.kr**
# TL;DR
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6550c4f27bbfce1878f5f280/vrQl8D8FV3vqUJYbPgsiG.png)
Janus is built on [Mistral-7B-v0.2](https://huggingface.co/mistral-community/Mistral-7B-v0.2) and fine-tuned on [Multifaceted Collection](https://huggingface.co/datasets/kaist-ai/Multifaceted-Collection-SFT), a preference dataset containing 196k unique system messages for aligning LLMs to diverse human preferences. Janus not only excels at generating personalized responses that cater to individual preferences, but is also adept at producing responses that are generally preferred for being helpful and harmless.
# Model Details
Janus-7B was created by supervised fine-tuning on all 196k training instances of the Multifaceted Collection.
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [Janus-DPO-7B](https://huggingface.co/kaist-ai/janus-dpo-7b), [Janus-ORPO-7B](https://huggingface.co/kaist-ai/janus-orpo-7b), [Janus-RM-7B](https://huggingface.co/kaist-ai/janus-rm-7b)
- **Training Datasets**: [Multifaceted-Collection-SFT](https://huggingface.co/datasets/kaist-ai/Multifaceted-Collection-SFT)
- **Resources for more information:**
- [Research paper](https://arxiv.org/abs/2405.17977)
- [GitHub Repo](https://github.com/kaistAI/Janus)
# Usage
Janus is trained to generalize across a wide range of system messages, so you can steer its responses by supplying a system message that describes the desired persona or preference. The input prompt format is as follows:
```
[INST] {system_message}\n{instruction} [/INST]
```
An example of inference code applying this format:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "kaist-ai/janus-7b"
device = "cuda:0"

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Prefer bfloat16 where the GPU supports it; otherwise fall back to float16
dtype = "float16"
if torch.cuda.is_bf16_supported():
    dtype = "bfloat16"

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=getattr(torch, dtype))
model.eval()
model.to(device)

# Prepare inputs
system = "As a financial news headline writer with a flair for the dramatic, you have taken on the role of crafting compelling headlines about the integration of AI into the financial sector. Your expertise allows you to weave industry-specific terminology seamlessly into each headline, striking a balance between capturing attention and providing meaningful insights into the transformative benefits of AI in finance. With each headline, you focus on elucidating the key advantages AI brings to financial operations, making complex information accessible and immediately impactful. While your headlines are designed to engage and inform an audience of finance and technology professionals, you navigate the fine line of excitement and accuracy with care, ensuring that the promises made are grounded in reality, thus avoiding any form of sensationalism. Your mission is to distill the essence of AI's impact on finance into a single, powerful line that speaks volumes to the informed reader."
prompt = "Write a headline for an article about the benefits of using AI in the finance sector."

# Build the prompt in the Mistral instruct format shown above
def apply_template_mistral_instruct(system_message, content):
    prompt = f"{system_message}\n{content}".strip()
    return f"[INST] {prompt} [/INST] "

input_str = apply_template_mistral_instruct(system, prompt)
input_ids = tokenizer.encode(input_str, return_tensors="pt")
print(input_str)

model_inputs = input_ids.to(device)

# Generate text and strip the echoed prompt from the decoded output
output_ids = model.generate(model_inputs, max_new_tokens=1024)
decoded = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
print(decoded[0][len(input_str):])
# Revolutionary Trends: How AI Is Redefining Efficiency and Accuracy in the Financial Realm
```
To train Janus and evaluate the responses it generates, please refer to the [GitHub Repo](https://github.com/kaistAI/Janus).
Additionally, refer to [Multifaceted Bench](https://huggingface.co/datasets/kaist-ai/Multifaceted-Bench), which evaluates how well LLMs generate personalized responses.
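As a minimal starting point, the benchmark can be loaded with the `datasets` library. This is only a sketch: the split name and field names below are assumptions for illustration, so check the dataset card for the actual schema. `apply_template_mistral_instruct` is the helper defined in the inference example above.
```python
from datasets import load_dataset

# Load the evaluation benchmark. The split ("train") and field names
# ("system", "prompt") are assumptions for illustration; consult the dataset
# card at https://huggingface.co/datasets/kaist-ai/Multifaceted-Bench for the
# actual schema before use.
bench = load_dataset("kaist-ai/Multifaceted-Bench", split="train")

for example in bench.select(range(3)):
    input_str = apply_template_mistral_instruct(example["system"], example["prompt"])
    # ... generate a response as in the inference example above, then score it
```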
# Training Details
## Training hyperparameters
The following hyperparameters were used during training; a sketch mapping them onto the Hugging Face `Trainer` API follows the list:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
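For orientation, these settings correspond roughly to the Hugging Face `TrainingArguments` below. This is an illustrative reconstruction, not the actual training script (the card's tags indicate training with axolotl on 4 GPUs); `output_dir` and the `bf16` flag are assumptions.
```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="janus-7b",           # assumption: any local path works
    learning_rate=5e-6,
    per_device_train_batch_size=2,   # 2 x 4 GPUs x 4 accumulation steps = 32 total
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,
    seed=42,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,                       # assumption: bf16 mixed precision was used
)
```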
## Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.0
# Citation
If you find this model helpful, please consider citing our paper!
**BibTeX:**
```bibtex
@misc{lee2024aligning,
      title={Aligning to Thousands of Preferences via System Message Generalization},
      author={Seongyun Lee and Sue Hyun Park and Seungone Kim and Minjoon Seo},
      year={2024},
      eprint={2405.17977},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
``` |