Text Generation · Transformers · Safetensors · English · sparsetral · conversational · custom_code · Inference Endpoints
francislabounty committed
Commit dff7fd9
1 Parent(s): b4ab75f

Create configuration_sparsetral.py

Files changed (1)
  1. configuration_sparsetral.py +157 -0
configuration_sparsetral.py ADDED
@@ -0,0 +1,157 @@
+ # coding=utf-8
+ # Copyright 2023 Mistral AI and the HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """ Sparsetral model configuration"""
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+
+
+ logger = logging.get_logger(__name__)
+
+
+ class SparsetralConfig(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a Sparsetral model. It is used to instantiate a
+     Sparsetral model according to the specified arguments, defining the model architecture. Instantiating a
+     configuration with the defaults will yield a configuration similar to that of Mistral-7B-v0.1 or
+     Mistral-7B-Instruct-v0.1.
+     [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
+     [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+     Args:
+         vocab_size (`int`, *optional*, defaults to 32000):
+             Vocabulary size of the model. Defines the number of different tokens that can be represented by the
+             `input_ids` passed when calling [`MistralModel`]
+         hidden_size (`int`, *optional*, defaults to 4096):
+             Dimension of the hidden representations.
+         intermediate_size (`int`, *optional*, defaults to 14336):
+             Dimension of the MLP representations.
+         num_hidden_layers (`int`, *optional*, defaults to 32):
+             Number of hidden layers in the Transformer decoder.
+         num_attention_heads (`int`, *optional*, defaults to 32):
+             Number of attention heads for each attention layer in the Transformer decoder.
+         num_key_value_heads (`int`, *optional*, defaults to 8):
+             This is the number of key/value heads that should be used to implement Grouped Query Attention. If
+             `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
+             `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
+             converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be
+             constructed by mean-pooling all the original heads within that group. For more details check out [this
+             paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `8`.
+         hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+             The non-linear activation function (function or string) in the decoder.
+         max_position_embeddings (`int`, *optional*, defaults to 32768):
+             The maximum sequence length that this model might ever be used with. Mistral's sliding window attention
+             gives a theoretical attention span of up to 4096*32 tokens across the stacked layers.
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         rms_norm_eps (`float`, *optional*, defaults to 1e-06):
+             The epsilon used by the rms normalization layers.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/value states (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         pad_token_id (`int`, *optional*):
+             The id of the padding token.
+         bos_token_id (`int`, *optional*, defaults to 1):
+             The id of the "beginning-of-sequence" token.
+         eos_token_id (`int`, *optional*, defaults to 2):
+             The id of the "end-of-sequence" token.
+         tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+             Whether the model's input and output word embeddings should be tied.
+         rope_theta (`float`, *optional*, defaults to 10000.0):
+             The base period of the RoPE embeddings.
+         sliding_window (`int`, *optional*, defaults to 4096):
+             Sliding window attention window size. If not specified, will default to `4096`.
+         attention_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout ratio for the attention probabilities.
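+         moe_dtype (`str`, *optional*, defaults to `"bfloat16"`):
+             Data type used for the mixture-of-experts adapter computations.
+         moe_scaling (`float`, *optional*, defaults to 1.0):
+             Scaling factor applied to the output of the expert adapters.
+         num_experts (`int`, *optional*, defaults to 16):
+             Number of experts in each mixture-of-experts adapter layer.
+         topk (`int`, *optional*, defaults to 4):
+             Number of experts each token is routed to.
+         output_router_logits (`bool`, *optional*, defaults to `False`):
+             Whether or not the router logits should be returned by the model.
+         adapter_dim (`int`, *optional*, defaults to 512):
+             Bottleneck dimension of the expert adapters.
+         adapter_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout ratio applied inside the expert adapters.
+         router_aux_loss_coef (`float`, *optional*, defaults to 0.01):
+             The coefficient for the router's auxiliary load-balancing loss.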
+     ```python
+     >>> from configuration_sparsetral import SparsetralConfig
+     >>> # Initializing a Sparsetral style configuration (Mistral-7B defaults plus MoE adapters)
+     >>> configuration = SparsetralConfig()
+     >>> # Accessing the model configuration
+     >>> configuration.num_experts
+     16
+     ```"""
+
+     model_type = "mistral"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     def __init__(
+         self,
+         vocab_size=32000,
+         hidden_size=4096,
+         intermediate_size=14336,
+         num_hidden_layers=32,
+         num_attention_heads=32,
+         num_key_value_heads=8,
+         hidden_act="silu",
+         max_position_embeddings=32768,
+         initializer_range=0.02,
+         rms_norm_eps=1e-6,
+         use_cache=True,
+         pad_token_id=None,
+         bos_token_id=1,
+         eos_token_id=2,
+         tie_word_embeddings=False,
+         rope_theta=10000.0,
+         sliding_window=4096,
+         attention_dropout=0.0,
+         moe_dtype="bfloat16",
+         moe_scaling=1.0,
+         num_experts=16,
+         topk=4,
+         output_router_logits=False,
+         adapter_dim=512,
+         adapter_dropout=0.0,
+         router_aux_loss_coef=0.01,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.max_position_embeddings = max_position_embeddings
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+         self.sliding_window = sliding_window
+
+         # for backward compatibility
+         if num_key_value_heads is None:
+             num_key_value_heads = num_attention_heads
+
+         self.num_key_value_heads = num_key_value_heads
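+         # With the defaults (32 attention heads, 8 key/value heads), each key/value
+         # head is shared by 32 // 8 = 4 query heads under grouped-query attention.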
+         self.hidden_act = hidden_act
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.use_cache = use_cache
+         self.rope_theta = rope_theta
+         self.attention_dropout = attention_dropout
+
+         # Sparsetral MoE adapter settings
+         self.moe_dtype = moe_dtype
+         self.moe_scaling = moe_scaling
+         self.num_experts = num_experts
+         self.topk = topk
+         self.output_router_logits = output_router_logits
+
+         self.adapter_dim = adapter_dim
+         self.adapter_dropout = adapter_dropout
+         self.router_aux_loss_coef = router_aux_loss_coef
+
+         super().__init__(
+             pad_token_id=pad_token_id,
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
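
A minimal usage sketch (not part of the commit): it assumes `configuration_sparsetral.py` is importable, e.g. from a local clone of this repo; the commented `AutoConfig` path is the usual loading pattern for `custom_code` repos, with `<repo-id>` as a placeholder rather than a real identifier.

```python
# Illustrative sketch only; assumes configuration_sparsetral.py is on the Python path.
from configuration_sparsetral import SparsetralConfig

# Defaults mirror Mistral-7B plus the MoE adapter settings defined above.
config = SparsetralConfig()
print(config.num_experts, config.topk, config.adapter_dim)  # -> 16 4 512

# Any of the MoE knobs can be overridden at construction time, as with any
# PretrainedConfig subclass.
config = SparsetralConfig(num_experts=8, topk=2, output_router_logits=True)

# For a Hub repo tagged custom_code, the configuration is normally loaded via
# AutoConfig with trust_remote_code=True ("<repo-id>" is a placeholder):
# from transformers import AutoConfig
# config = AutoConfig.from_pretrained("<repo-id>", trust_remote_code=True)
```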