ilu000 committed
Commit
4e28bd6
1 Parent(s): 5f954b3

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,153 @@
---
language:
- en
library_name: transformers
license: apache-2.0
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
thumbnail: >-
  https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
widget:
- messages:
  - role: user
    content: Why is drinking water so healthy?
pipeline_tag: text-generation
---
# Model Card
## Summary

h2o-danube2-1.8b-chat is a chat fine-tuned model by H2O.ai with 1.8 billion parameters. We release three versions of this model:

| Model Name | Description |
|:-----------|:------------|
| [h2oai/h2o-danube2-1.8b-base](https://huggingface.co/h2oai/h2o-danube2-1.8b-base) | Base model |
| [h2oai/h2o-danube2-1.8b-sft](https://huggingface.co/h2oai/h2o-danube2-1.8b-sft) | SFT tuned |
| [h2oai/h2o-danube2-1.8b-chat](https://huggingface.co/h2oai/h2o-danube2-1.8b-chat) | SFT + DPO tuned |

This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).

## Model Architecture

We adjust the Llama 2 architecture for a total of around 1.8b parameters. For details, please refer to our [Technical Report](https://arxiv.org/abs/2401.16818). We use the Mistral tokenizer with a vocabulary size of 32,000 and train our model up to a context length of 8,192.

The details of the model architecture are:

| Hyperparameter  | Value |
|:----------------|:------|
| n_layers        | 24    |
| n_heads         | 32    |
| n_query_groups  | 8     |
| n_embd          | 2560  |
| vocab size      | 32000 |
| sequence length | 8192  |

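These values can be cross-checked programmatically against the model configuration. A minimal sketch using the standard `transformers` config API (the attribute names follow the Mistral config, as in the `config.json` shipped with this commit):

```python
from transformers import AutoConfig

# Load only the configuration; no model weights are downloaded for this check.
config = AutoConfig.from_pretrained("h2oai/h2o-danube2-1.8b-chat")

print(config.num_hidden_layers)        # 24    (n_layers)
print(config.num_attention_heads)      # 32    (n_heads)
print(config.num_key_value_heads)      # 8     (n_query_groups)
print(config.hidden_size)              # 2560  (n_embd)
print(config.vocab_size)               # 32000
print(config.max_position_embeddings)  # 8192  (sequence length)
```
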
## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure the library is installed:

```bash
pip install "transformers>=4.39.3"
```

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="h2oai/h2o-danube2-1.8b-chat",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# We use the HF Tokenizer chat template to format each message
# https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "Why is drinking water so healthy?"},
]
prompt = pipe.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
res = pipe(
    prompt,
    max_new_tokens=256,
)
print(res[0]["generated_text"])
```

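If you prefer working with the tokenizer and model objects directly instead of the pipeline, a minimal sketch under the same assumptions (bfloat16 weights, a CUDA device, and the chat template shipped with the tokenizer) could look like this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "h2oai/h2o-danube2-1.8b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Why is drinking water so healthy?"}]
# apply_chat_template renders the turn as <|prompt|>...</s><|answer|> (see tokenizer_config.json)
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Drop the prompt tokens so only the generated answer is decoded
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```
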
## Quantization and sharding

You can load the model with quantization by specifying `load_in_8bit=True` or `load_in_4bit=True`. Sharding across multiple GPUs is possible by setting `device_map="auto"`.

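As a concrete illustration, a minimal sketch of 4-bit loading with automatic device placement (this assumes the `bitsandbytes` and `accelerate` packages are installed; `load_in_4bit`/`load_in_8bit` are the standard `from_pretrained` quantization flags):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "h2oai/h2o-danube2-1.8b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_4bit=True,   # or load_in_8bit=True for 8-bit weights
    device_map="auto",   # shard/place layers across the available GPUs
)
```
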
## Model Architecture

```
MistralForCausalLM(
  (model): MistralModel(
    (embed_tokens): Embedding(32000, 2560, padding_idx=0)
    (layers): ModuleList(
      (0-23): 24 x MistralDecoderLayer(
        (self_attn): MistralAttention(
          (q_proj): Linear(in_features=2560, out_features=2560, bias=False)
          (k_proj): Linear(in_features=2560, out_features=640, bias=False)
          (v_proj): Linear(in_features=2560, out_features=640, bias=False)
          (o_proj): Linear(in_features=2560, out_features=2560, bias=False)
          (rotary_emb): MistralRotaryEmbedding()
        )
        (mlp): MistralMLP(
          (gate_proj): Linear(in_features=2560, out_features=6912, bias=False)
          (up_proj): Linear(in_features=2560, out_features=6912, bias=False)
          (down_proj): Linear(in_features=6912, out_features=2560, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): MistralRMSNorm()
        (post_attention_layernorm): MistralRMSNorm()
      )
    )
    (norm): MistralRMSNorm()
  )
  (lm_head): Linear(in_features=2560, out_features=32000, bias=False)
)
```

## Benchmarks

### 🤗 Open LLM Leaderboard

| Benchmark     | acc_n |
|:--------------|:-----:|
| Average       | 48.44 |
| ARC-challenge | 43.43 |
| Hellaswag     | 73.54 |
| MMLU          | 37.77 |
| TruthfulQA    | 39.96 |
| Winogrande    | 69.77 |
| GSM8K         | 26.16 |

### MT-Bench

```
First Turn: 6.23
Second Turn: 5.34
Average: 5.79
```

![image/png](https://cdn-uploads.huggingface.co/production/uploads/636d18755aaed143cd6698ef/LpqAu18h3q88TpVaHwxC6.png)

## Disclaimer

Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.

- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.

By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
config.json ADDED
@@ -0,0 +1,27 @@
{
  "_name_or_path": "h2oai/h2o-danube2-1.8b-chat",
  "architectures": [
    "MistralForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 2560,
  "initializer_range": 0.02,
  "intermediate_size": 6912,
  "max_position_embeddings": 8192,
  "model_type": "mistral",
  "num_attention_heads": 32,
  "num_hidden_layers": 24,
  "num_key_value_heads": 8,
  "pad_token_id": 0,
  "rms_norm_eps": 1e-05,
  "rope_theta": 10000,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.38.2",
  "use_cache": true,
  "vocab_size": 32000
}
generation_config.json ADDED
@@ -0,0 +1,8 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "pad_token_id": 0,
  "repetition_penalty": 1.1,
  "transformers_version": "4.38.2"
}
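
These defaults (notably `repetition_penalty: 1.1`) are picked up automatically by `generate` and by the `pipeline` call in the README example. A minimal sketch of overriding them per call, reusing `pipe` and `prompt` from the usage example above (the parameter names are the standard `transformers` generation arguments):

```python
res = pipe(
    prompt,
    max_new_tokens=256,
    repetition_penalty=1.2,  # overrides the 1.1 default from generation_config.json
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.7,
)
print(res[0]["generated_text"])
```
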
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6470e05a11d3ca8f04b91101d779717d5680940a5b94c68f6f05bddb1d913d90
size 3662427808
special_tokens_map.json ADDED
@@ -0,0 +1,44 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "cls_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
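
Together with the token ids in `config.json` (`bos_token_id: 1`, `eos_token_id: 2`, `pad_token_id: 0`) and the `added_tokens_decoder` in `tokenizer_config.json` below, this maps the special tokens to fixed ids; a minimal sketch for verifying the mapping with the loaded tokenizer:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("h2oai/h2o-danube2-1.8b-chat")
print(tokenizer.bos_token, tokenizer.bos_token_id)  # <s> 1
print(tokenizer.eos_token, tokenizer.eos_token_id)  # </s> 2
print(tokenizer.pad_token, tokenizer.pad_token_id)  # <unk> 0
print(tokenizer.unk_token, tokenizer.unk_token_id)  # <unk> 0
```
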
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
size 493443
tokenizer_config.json ADDED
@@ -0,0 +1,46 @@
{
  "add_bos_token": false,
  "add_eos_token": false,
  "add_prefix_space": true,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [],
  "bos_token": "<s>",
  "chat_template": "{% for message in messages %}{% if message['role'] == 'user' %}{{ '<|prompt|>' + message['content'] + eos_token }}{% elif message['role'] == 'system' %}{{ raise_exception('System role not supported') }}{% elif message['role'] == 'assistant' %}{{ '<|answer|>' + message['content'] + eos_token }}{% endif %}{% if loop.last and add_generation_prompt %}{{ '<|answer|>' }}{% endif %}{% endfor %}",
  "clean_up_tokenization_spaces": false,
  "cls_token": "</s>",
  "eos_token": "</s>",
  "legacy": true,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": "<unk>",
  "sep_token": "</s>",
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": "<unk>",
  "use_default_system_prompt": false
}
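
The `chat_template` above wraps each user turn as `<|prompt|>...{eos}` and appends `<|answer|>` when a generation prompt is requested; system messages are rejected. A minimal sketch for inspecting the rendered prompt string (the expected output is shown as a comment, derived directly from the template):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("h2oai/h2o-danube2-1.8b-chat")
text = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Why is drinking water so healthy?"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(text)
# <|prompt|>Why is drinking water so healthy?</s><|answer|>
```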