eyad-silx committed
Commit 9940f6c · verified · 1 Parent(s): 35e3d0e

Upload configuration_quasar.py with huggingface_hub

Files changed (1):
  1. configuration_quasar.py +217 -0
configuration_quasar.py ADDED
@@ -0,0 +1,217 @@
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.modeling_rope_utils import rope_config_validation
+
+
+ class QuasarConfig(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`QuasarModel`]. It is used to instantiate a
+     Quasar model according to the specified arguments, defining the model architecture. Instantiating a configuration
+     with the defaults will yield a similar configuration to that of [Quasar/Quasar-MoE](https://huggingface.co/Quasar/Quasar-MoE).
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 151936):
+             Vocabulary size of the Quasar model. Defines the number of different tokens that can be represented by the
+             `input_ids` passed when calling [`QuasarModel`].
+         hidden_size (`int`, *optional*, defaults to 2048):
+             Dimension of the hidden representations.
+         intermediate_size (`int`, *optional*, defaults to 6144):
+             Dimension of the MLP representations.
+         num_hidden_layers (`int`, *optional*, defaults to 24):
+             Number of hidden layers in the Transformer decoder.
+         num_attention_heads (`int`, *optional*, defaults to 32):
+             Number of attention heads for each attention layer in the Transformer decoder.
+         num_key_value_heads (`int`, *optional*, defaults to 4):
+             This is the number of key_value heads that should be used to implement Grouped Query Attention. If
+             `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
+             `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
+             converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
+             by meanpooling all the original heads within that group. For more details, check out [this
+             paper](https://huggingface.co/papers/2305.13245). If it is not specified, will default to `4`.
+         hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+             The non-linear activation function (function or string) in the decoder.
+         max_position_embeddings (`int`, *optional*, defaults to 32768):
+             The maximum sequence length that this model might ever be used with.
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         rms_norm_eps (`float`, *optional*, defaults to 1e-06):
+             The epsilon used by the rms normalization layers.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+             Whether the model's input and output word embeddings should be tied.
+         rope_theta (`float`, *optional*, defaults to 10000.0):
+             The base period of the RoPE embeddings.
+         rope_scaling (`Dict`, *optional*):
+             Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope
+             type and you expect the model to work on a longer `max_position_embeddings`, we recommend you update this
+             value accordingly.
+             Expected contents:
+                 `rope_type` (`str`):
+                     The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
+                     'llama3'], with 'default' being the original RoPE implementation.
+                 `factor` (`float`, *optional*):
+                     Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
+                     most scaling types, a `factor` of x will enable the model to handle sequences of length x *
+                     original maximum pre-trained length.
+                 `original_max_position_embeddings` (`int`, *optional*):
+                     Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
+                     pretraining.
+                 `attention_factor` (`float`, *optional*):
+                     Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention
+                     computation. If unspecified, it defaults to the value recommended by the implementation, using the
+                     `factor` field to infer the suggested value.
+                 `beta_fast` (`float`, *optional*):
+                     Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
+                     ramp function. If unspecified, it defaults to 32.
+                 `beta_slow` (`float`, *optional*):
+                     Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear
+                     ramp function. If unspecified, it defaults to 1.
+                 `short_factor` (`list[float]`, *optional*):
+                     Only used with 'longrope'. The scaling factor to be applied to short contexts (<
+                     `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
+                     size divided by the number of attention heads divided by 2.
+                 `long_factor` (`list[float]`, *optional*):
+                     Only used with 'longrope'. The scaling factor to be applied to long contexts (>
+                     `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
+                     size divided by the number of attention heads divided by 2.
+                 `low_freq_factor` (`float`, *optional*):
+                     Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE.
+                 `high_freq_factor` (`float`, *optional*):
+                     Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE.
+         attention_bias (`bool`, *optional*, defaults to `False`):
+             Whether to use a bias in the query, key, value and output projection layers during self-attention.
+         use_sliding_window (`bool`, *optional*, defaults to `False`):
+             Whether to use sliding window attention.
+         sliding_window (`int`, *optional*, defaults to 4096):
+             Sliding window attention (SWA) window size. If not specified, will default to `4096`.
+         attention_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout ratio for the attention probabilities.
+         decoder_sparse_step (`int`, *optional*, defaults to 1):
+             The frequency of the MoE layer.
+         moe_intermediate_size (`int`, *optional*, defaults to 768):
+             Intermediate size of the routed expert.
+         num_experts_per_tok (`int`, *optional*, defaults to 8):
+             Number of selected experts.
+         num_experts (`int`, *optional*, defaults to 128):
+             Number of routed experts.
+         norm_topk_prob (`bool`, *optional*, defaults to `True`):
+             Whether to normalize the topk probabilities.
+         output_router_logits (`bool`, *optional*, defaults to `False`):
+             Whether or not the router logits should be returned by the model. Enabling this will also
+             allow the model to output the auxiliary loss, including load balancing loss and router z-loss.
+         router_aux_loss_coef (`float`, *optional*, defaults to 0.001):
+             The aux loss factor for the total loss.
+         mlp_only_layers (`list[int]`, *optional*, defaults to `[]`):
+             Indicates which layers use QuasarMLP rather than QuasarSparseMoeBlock. The list contains layer indices
+             from 0 to num_layers-1. If `mlp_only_layers` is empty, `decoder_sparse_step` is used to determine the
+             sparsity.
+         routed_scaling_factor (`float`, *optional*, defaults to 2.5):
+             Scaling factor applied to the output of the routed experts.
+         n_shared_experts (`int`, *optional*, defaults to 1):
+             Number of shared experts that are always active alongside the routed experts.
+
+     ```python
+     >>> from transformers import QuasarModel, QuasarConfig
+
+     >>> # Initializing a Quasar style configuration
+     >>> configuration = QuasarConfig()
+
+     >>> # Initializing a model from the Quasar-MoE style configuration
+     >>> model = QuasarModel(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+
+     model_type = "Quasar"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     # Default tensor parallel plan for base model `Quasar`
+     base_model_tp_plan = {
+         "layers.*.self_attn.q_proj": "colwise",
+         "layers.*.self_attn.k_proj": "colwise",
+         "layers.*.self_attn.v_proj": "colwise",
+         "layers.*.self_attn.o_proj": "rowwise",
+         "layers.*.mlp.experts.*.gate_proj": "colwise",
+         "layers.*.mlp.experts.*.up_proj": "colwise",
+         "layers.*.mlp.experts.*.down_proj": "rowwise",
+         "layers.*.mlp.gate_proj": "colwise",
+         "layers.*.mlp.up_proj": "colwise",
+         "layers.*.mlp.down_proj": "rowwise",
+     }
+     base_model_pp_plan = {
+         "embed_tokens": (["input_ids"], ["inputs_embeds"]),
+         "layers": (["hidden_states", "attention_mask"], ["hidden_states"]),
+         "norm": (["hidden_states"], ["hidden_states"]),
+     }
+
+     def __init__(
+         self,
+         vocab_size=151936,
+         hidden_size=2048,
+         intermediate_size=6144,
+         num_hidden_layers=24,
+         num_attention_heads=32,
+         num_key_value_heads=4,
+         hidden_act="silu",
+         max_position_embeddings=32768,
+         initializer_range=0.02,
+         rms_norm_eps=1e-6,
+         use_cache=True,
+         tie_word_embeddings=False,
+         rope_theta=10000.0,
+         rope_scaling=None,
+         attention_bias=False,
+         use_sliding_window=False,
+         sliding_window=4096,
+         attention_dropout=0.0,
+         decoder_sparse_step=1,
+         moe_intermediate_size=768,
+         num_experts_per_tok=8,
+         num_experts=128,
+         norm_topk_prob=True,
+         output_router_logits=False,
+         router_aux_loss_coef=0.001,
+         mlp_only_layers=None,
+         routed_scaling_factor=2.5,
+         n_shared_experts=1,
+         **kwargs,
+     ):
+         super().__init__(
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
+         self.vocab_size = vocab_size
+         self.max_position_embeddings = max_position_embeddings
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+         self.use_sliding_window = use_sliding_window
+         self.sliding_window = sliding_window if use_sliding_window else None
+
+         self.num_key_value_heads = num_key_value_heads
+         self.hidden_act = hidden_act
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.use_cache = use_cache
+         self.rope_theta = rope_theta
+         self.rope_scaling = rope_scaling
+         self.attention_bias = attention_bias
+         self.attention_dropout = attention_dropout
+         # Validate the correctness of rotary position embeddings parameters
+         # BC: if there is a 'type' field, move it to 'rope_type'.
+         if self.rope_scaling is not None and "type" in self.rope_scaling:
+             self.rope_scaling["rope_type"] = self.rope_scaling["type"]
+         rope_config_validation(self)
+
+         # MoE arguments
+         self.decoder_sparse_step = decoder_sparse_step
+         self.moe_intermediate_size = moe_intermediate_size
+         self.num_experts_per_tok = num_experts_per_tok
+         self.num_experts = num_experts
+         self.norm_topk_prob = norm_topk_prob
+         self.output_router_logits = output_router_logits
+         self.router_aux_loss_coef = router_aux_loss_coef
+         self.mlp_only_layers = [] if mlp_only_layers is None else mlp_only_layers
+
+         self.routed_scaling_factor = routed_scaling_factor
+         self.n_shared_experts = n_shared_experts
+
+
+ __all__ = ["QuasarConfig"]
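
For readers skimming the diff: the interplay of `decoder_sparse_step`, `mlp_only_layers`, and `num_experts` documented above determines which decoder layers carry a sparse MoE block. The modeling file is not part of this commit, so the sketch below assumes the Qwen2-MoE-style placement rule that these fields mirror; treat it as an illustration, not the confirmed Quasar behavior.

```python
from configuration_quasar import QuasarConfig

# Illustrative config: MoE every second layer, first two layers forced dense.
config = QuasarConfig(num_hidden_layers=24, decoder_sparse_step=2, mlp_only_layers=[0, 1])

def is_moe_layer(config: QuasarConfig, layer_idx: int) -> bool:
    # Assumed Qwen2-MoE-style rule: a layer is sparse only if it is not listed
    # in mlp_only_layers and falls on a `decoder_sparse_step` boundary.
    return (
        layer_idx not in config.mlp_only_layers
        and config.num_experts > 0
        and (layer_idx + 1) % config.decoder_sparse_step == 0
    )

print([i for i in range(config.num_hidden_layers) if is_moe_layer(config, i)])
# -> [3, 5, ..., 23]: odd-indexed layers, minus layer 1, which is forced dense
```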
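Similarly, the `rope_scaling` dictionary documented in the docstring is checked at construction time by `rope_config_validation`, so an invalid combination fails fast. A minimal sketch of a YaRN-style 4x context extension, with illustrative values rather than anything shipped with the model:

```python
from configuration_quasar import QuasarConfig

# Hypothetical values: extend the 32768-token pre-trained context 4x with YaRN.
config = QuasarConfig(
    max_position_embeddings=131072,  # 4 * 32768, per the docstring's advice
    rope_scaling={"rope_type": "yarn", "factor": 4.0},
)
print(config.rope_scaling)  # {'rope_type': 'yarn', 'factor': 4.0}
```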
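Finally, since this commit uploads the file as custom code rather than as part of the transformers library, `QuasarConfig` is not importable from `transformers` itself (the docstring example presumes it is). One way a downstream user could resolve it, sketched under the assumption that nothing else is wired up yet, is to register the class with `AutoConfig`:

```python
from transformers import AutoConfig
from configuration_quasar import QuasarConfig

# Register the custom class so AutoConfig can resolve model_type "Quasar".
AutoConfig.register("Quasar", QuasarConfig)
config = AutoConfig.for_model("Quasar")
print(type(config).__name__)  # QuasarConfig

# Alternatively, if the Hub repo's config.json maps this class under `auto_map`
# (not shown in this commit), remote loading would also work:
# config = AutoConfig.from_pretrained("Quasar/Quasar-MoE", trust_remote_code=True)
```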