sequelbox committed on
Commit 0124fb8
1 Parent(s): 1f75d39

4decf441510200d7a383a87168f9214256edfd2bdf67ca35d6fc6bd17dad0b63

.ipynb_checkpoints/README-checkpoint.md ADDED
@@ -0,0 +1,77 @@
+ ---
+ language:
+ - en
+ pipeline_tag: text-generation
+ tags:
+ - shining-valiant
+ - valiant
+ - valiant-labs
+ - llama
+ - llama-2
+ - llama-2-chat
+ - 70b
+ model_type: llama
+ license: llama2
+ ---
+
+
+ ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/5rUJPhu_6LyDvSQogSVhk.jpeg)
+
+
+ Shining Valiant is a chat model built on the Llama 2 architecture, finetuned on our data for insight, creativity, passion, and friendliness.
+ - Uses the llama-2-70b-chat model, with safetensors
+ - Finetuned on multiple runs across private and public data
+ - Data focused on knowledge, enthusiasm, and structured reasoning
+
+ ## Version
+
+ The current version is **1.3!**
+
+ We're thrilled to bring you our newest release!
+
+ Previous versions remain available in the repository. New models will be released for everyone once our team's training and validation process is complete.
+
+ ## Evaluation
+
+ | Model | Avg | ARC | HS | MMLU | TQA |
+ |-----------------------|--------|-------|-------|--------|-------|
+ | **Shining Valiant 1.2** | 74.17 | 72.95 | 87.88 | 70.97 | 64.88 |
+ | Llama 2 | 67.35 | 67.32 | 87.33 | 69.83 | 44.92 |
+ | Llama 2 Chat | 66.80 | 64.59 | 85.88 | 63.91 | 52.80 |
+
+ **Shining Valiant 1.3** is awaiting full results from the Open LLM Leaderboard.
+
+ SV 1.3 outperformed SV 1.2 on our internal testing.
+
+ ## Prompting Guide
+ Shining Valiant uses the same prompt format as Llama 2 Chat - feel free to use your existing prompts and scripts!
+ A few examples of different formats:
+
+ 1. [INST] Good morning! Can you let me know how to parse a text file and turn the semicolons into commas? [/INST]
+
+ 2. [INST] (You are an intelligent, helpful AI assistant.) Hello, can you write me a thank you letter? [/INST]
+
+ 3. [INST] <<SYS>>You are an intelligent, helpful AI assistant.<</SYS>>Deep dive about a country with interesting history: [/INST]
+
+ ## The Model
+ Shining Valiant is built on top of Sunset Boulevard, which uses Llama 2's 70b parameter architecture and features upgraded general capability.
+
+ From there, we've created Shining Valiant through multiple finetuning runs on different compositions of our private dataset.
+
+ Our private data focuses primarily on applying Shining Valiant's personality: she's friendly, enthusiastic, insightful, knowledgeable, and loves to learn!
+
+ We are actively working on expanding and improving the Shining Valiant dataset for use in future releases of this model and others.
+
+
+
+ ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/VCJ8Fmefd8cdVhXSSxJiD.jpeg)
+
+
+ Shining Valiant is created by [Valiant Labs.](http://valiantlabs.ca/)
+
+ [Follow us on X for updates on our models!](https://twitter.com/valiant_labs)
+
+ We care about open source.
+ For everyone to use.
+
+ We encourage others to finetune further from our models.
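To make the Prompting Guide in the README above concrete, here is a minimal sketch (not part of this commit) of querying the model with the Llama 2 Chat format. It assumes the repo id "ValiantLabs/ShiningValiant" (taken from the config files below), that accelerate is installed for device_map="auto", and enough GPU memory for a 70B checkpoint.

```python
# Illustrative only: wrap a system prompt and a user turn in the Llama 2 Chat
# format described in the Prompting Guide, then generate a reply.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ValiantLabs/ShiningValiant"  # repo id as it appears in config-checkpoint.py
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # follow the dtype stored in the checkpoint
    device_map="auto",    # requires the accelerate package
)

system = "You are an intelligent, helpful AI assistant."
user = "Good morning! Can you let me know how to parse a text file and turn the semicolons into commas?"

# Llama 2 Chat format: system prompt inside <<SYS>> tags, whole turn inside [INST] ... [/INST].
prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```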
.ipynb_checkpoints/config-checkpoint.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "_name_or_path": "./SunsetBoulevard/",
+   "architectures": [
+     "LlamaForCausalLM"
+   ],
+   "attention_bias": false,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 8192,
+   "initializer_range": 0.02,
+   "intermediate_size": 28672,
+   "max_position_embeddings": 4096,
+   "model_type": "llama",
+   "num_attention_heads": 64,
+   "num_hidden_layers": 80,
+   "num_key_value_heads": 8,
+   "pad_token_id": 0,
+   "pretraining_tp": 1,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": null,
+   "rope_theta": 10000.0,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float32",
+   "transformers_version": "4.35.0",
+   "use_cache": false,
+   "vocab_size": 32000
+ }
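These are the standard Llama 2 70B hyperparameters. As a quick sanity check on the "70b" in the model card, the sketch below (not from the repository) estimates the parameter count implied by the values above; it assumes the usual Llama attention/MLP layout with grouped-query attention and untied embeddings, which matches this config.

```python
# Rough parameter count implied by the config above (illustrative arithmetic).
hidden, inter, layers = 8192, 28672, 80
vocab, n_heads, n_kv = 32000, 64, 8
head_dim = hidden // n_heads  # 128

attn = 2 * hidden * hidden + 2 * hidden * (n_kv * head_dim)  # q/o projections + k/v (GQA)
mlp = 3 * hidden * inter                                     # gate, up, down projections
norms = 2 * hidden                                           # input + post-attention RMSNorm
per_layer = attn + mlp + norms

total = layers * per_layer + 2 * vocab * hidden + hidden     # + embeddings, untied lm_head, final norm
print(f"{total:,} parameters")                               # -> 68,976,648,192, i.e. the "70b" class
```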
.ipynb_checkpoints/config-checkpoint.py ADDED
@@ -0,0 +1,28 @@
+ {
+   "_name_or_path": "ValiantLabs/ShiningValiant",
+   "architectures": [
+     "LlamaForCausalLM"
+   ],
+   "attention_bias": false,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 8192,
+   "initializer_range": 0.02,
+   "intermediate_size": 28672,
+   "max_position_embeddings": 4096,
+   "model_type": "llama",
+   "num_attention_heads": 64,
+   "num_hidden_layers": 80,
+   "num_key_value_heads": 8,
+   "pad_token_id": 0,
+   "pretraining_tp": 1,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": null,
+   "rope_theta": 10000.0,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float32",
+   "transformers_version": "4.35.0",
+   "use_cache": false,
+   "vocab_size": 32000
+ }
.ipynb_checkpoints/generation_config-checkpoint.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "pad_token_id": 0,
+   "transformers_version": "4.35.0",
+   "use_cache": false
+ }
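For reference, these are the defaults that transformers' GenerationConfig picks up when loading the repository. A small illustrative check (repo id assumed, as above):

```python
# Illustrative: read back the generation defaults committed above.
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("ValiantLabs/ShiningValiant")
print(gen_cfg.bos_token_id, gen_cfg.eos_token_id, gen_cfg.pad_token_id)  # expected: 1 2 0
```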
.ipynb_checkpoints/special_tokens_map-checkpoint.json ADDED
@@ -0,0 +1,11 @@
+ {
+   "additional_special_tokens": [
+     "<unk>",
+     "<s>",
+     "</s>"
+   ],
+   "bos_token": "<s>",
+   "eos_token": "</s>",
+   "pad_token": "</s>",
+   "unk_token": "<unk>"
+ }
.ipynb_checkpoints/tokenizer_config-checkpoint.json ADDED
@@ -0,0 +1,48 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "<unk>",
+     "<s>",
+     "</s>"
+   ],
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": false,
+   "max_length": 2048,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "</s>",
+   "padding_side": "right",
+   "sp_model_kwargs": {},
+   "stride": 0,
+   "tokenizer_class": "LlamaTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": true
+ }
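The tokenizer files above pin <unk>, <s>, and </s> to ids 0, 1, and 2, and reuse </s> as the padding token with right-side padding. A quick illustrative check (repo id assumed, as above):

```python
# Illustrative: confirm the special-token mapping and padding setup from the files above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("ValiantLabs/ShiningValiant")
print(tok.convert_tokens_to_ids(["<unk>", "<s>", "</s>"]))  # expected: [0, 1, 2]
print(tok.pad_token, tok.padding_side)                      # expected: </s> right
```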
README.md CHANGED
@@ -25,11 +25,11 @@ Shining Valiant is a chat model built on the Llama 2 architecture, finetuned on
 
 ## Version
 
- The current version is **1.2**.
+ The current version is **1.3!**
 
- **Version 1.3** is now **being validated for release!**
+ We're thrilled to bring you our newest release!
 
- Previous versions remain available in the repository. New models will be released for everyone once our team's training and validation process is complete :)
+ Previous versions remain available in the repository. New models will be released for everyone once our team's training and validation process is complete.
 
 ## Evaluation
 
@@ -39,6 +39,10 @@ Previous versions remain available in the repository. New models will be release
 | Llama 2 | 67.35 | 67.32 | 87.33 | 69.83 | 44.92 |
 | Llama 2 Chat | 66.80 | 64.59 | 85.88 | 63.91 | 52.80 |
 
+ **Shining Valiant 1.3** is awaiting full results from the Open LLM Leaderboard.
+
+ SV 1.3 outperformed SV 1.2 on our internal testing.
+
 ## Prompting Guide
 Shining Valiant uses the same prompt format as Llama 2 Chat - feel free to use your existing prompts and scripts!
 A few examples of different formats:
@@ -50,7 +54,7 @@ A few examples of different formats:
 3. [INST] <<SYS>>You are an intelligent, helpful AI assistant.<</SYS>>Deep dive about a country with interesting history: [/INST]
 
 ## The Model
- Shining Valiant is built on top of Stellar Bright, which uses Llama 2's 70b parameter architecture and features upgraded general capability. (Stellar Bright uses public open source data only.)
+ Shining Valiant is built on top of Sunset Boulevard, which uses Llama 2's 70b parameter architecture and features upgraded general capability.
 
 From there, we've created Shining Valiant through multiple finetuning runs on different compositions of our private dataset.
 
config.json CHANGED
@@ -22,7 +22,7 @@
   "rope_theta": 10000.0,
   "tie_word_embeddings": false,
   "torch_dtype": "float32",
- "transformers_version": "4.34.0",
+ "transformers_version": "4.35.0",
   "use_cache": false,
   "vocab_size": 32000
 }
generation_config.json CHANGED
@@ -3,6 +3,6 @@
   "bos_token_id": 1,
   "eos_token_id": 2,
   "pad_token_id": 0,
- "transformers_version": "4.34.0",
+ "transformers_version": "4.35.0",
   "use_cache": false
 }
model-00061-of-00061.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c25429c3da71e1db5ad2b6d7cd64f22675a117f9d9967881bba38219ea7d5068
+ size 1988198960
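The entry above is a Git LFS pointer rather than the weights themselves: the oid field is the SHA-256 of the real ~2.0 GB shard. A small sketch for verifying a downloaded copy against that pointer (the local path is an assumption):

```python
# Verify a downloaded shard against the SHA-256 recorded in the LFS pointer above.
import hashlib

EXPECTED_OID = "c25429c3da71e1db5ad2b6d7cd64f22675a117f9d9967881bba38219ea7d5068"
path = "model-00061-of-00061.safetensors"  # assumed local download path

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
        sha.update(chunk)

print("OK" if sha.hexdigest() == EXPECTED_OID else "MISMATCH")
```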
model.safetensors.index.json CHANGED
The diff for this file is too large to render. See raw diff