Svngoku committed
Commit e1c2ebd
1 Parent(s): bacf4af

Update README.md

Files changed (1)
  1. README.md +140 -2
README.md CHANGED
@@ -19,9 +19,9 @@ language:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
- # Aya-23-8b-afrimmlu-lin
+ # Aya-23-8b Afrimmlu Lingala
 
- This model is a fine-tuned version of [CohereForAI/aya-23-8b](https://huggingface.co/CohereForAI/aya-23-8b) on an [Masakhane/afrimmlu](https://huggingface.co/datasets/masakhane/afrimmlu/).
+ This model is a fine-tuned version of [CohereForAI/aya-23-8b](https://huggingface.co/CohereForAI/aya-23-8b) on [Masakhane/afrimmlu](https://huggingface.co/datasets/masakhane/afrimmlu/).
 
 ## Model description
@@ -37,6 +37,102 @@
 
 ## Training procedure
 
+ ## Prompt Formatting
+
+ ```py
+ # Builds one chat-formatted training string per example in the batch.
+ def formatting_prompts_func(example):
+     output_texts = []
+     for i in range(len(example['choices'])):
+         text = f"<|START_OF_TURN_TOKEN|><|USER_TOKEN|>Question : {example['question'][i]}, Choices : {example['choices'][i]}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>{example['answer'][i]}"
+         output_texts.append(text)
+     return output_texts
+ ```
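+
+ For a quick sanity check, this is what the formatter yields on a toy batch (a sketch; the field values are made up, only the field names come from the function above):
+
+ ```py
+ # Toy batch with the same columns the formatter reads; values are illustrative.
+ batch = {
+     "question": ["Example question?"],
+     "choices": [["A", "B", "C", "D"]],
+     "answer": ["A"],
+ }
+ print(formatting_prompts_func(batch)[0])
+ # <|START_OF_TURN_TOKEN|><|USER_TOKEN|>Question : Example question?, Choices : ['A', 'B', 'C', 'D']<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>A
+ ```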
+
+ ## Model Architecture
+
+ ```txt
+ PeftModelForCausalLM(
+   (base_model): LoraModel(
+     (model): CohereForCausalLM(
+       (model): CohereModel(
+         (embed_tokens): Embedding(256000, 4096, padding_idx=0)
+         (layers): ModuleList(
+           (0-31): 32 x CohereDecoderLayer(
+             (self_attn): CohereAttention(
+               (q_proj): lora.Linear4bit(
+                 (base_layer): Linear4bit(in_features=4096, out_features=4096, bias=False)
+                 (lora_dropout): ModuleDict(
+                   (default): Identity()
+                 )
+                 (lora_A): ModuleDict(
+                   (default): Linear(in_features=4096, out_features=32, bias=False)
+                 )
+                 (lora_B): ModuleDict(
+                   (default): Linear(in_features=32, out_features=4096, bias=False)
+                 )
+                 (lora_embedding_A): ParameterDict()
+                 (lora_embedding_B): ParameterDict()
+               )
+               (k_proj): lora.Linear4bit(
+                 (base_layer): Linear4bit(in_features=4096, out_features=1024, bias=False)
+                 (lora_dropout): ModuleDict(
+                   (default): Identity()
+                 )
+                 (lora_A): ModuleDict(
+                   (default): Linear(in_features=4096, out_features=32, bias=False)
+                 )
+                 (lora_B): ModuleDict(
+                   (default): Linear(in_features=32, out_features=1024, bias=False)
+                 )
+                 (lora_embedding_A): ParameterDict()
+                 (lora_embedding_B): ParameterDict()
+               )
+               (v_proj): lora.Linear4bit(
+                 (base_layer): Linear4bit(in_features=4096, out_features=1024, bias=False)
+                 (lora_dropout): ModuleDict(
+                   (default): Identity()
+                 )
+                 (lora_A): ModuleDict(
+                   (default): Linear(in_features=4096, out_features=32, bias=False)
+                 )
+                 (lora_B): ModuleDict(
+                   (default): Linear(in_features=32, out_features=1024, bias=False)
+                 )
+                 (lora_embedding_A): ParameterDict()
+                 (lora_embedding_B): ParameterDict()
+               )
+               (o_proj): lora.Linear4bit(
+                 (base_layer): Linear4bit(in_features=4096, out_features=4096, bias=False)
+                 (lora_dropout): ModuleDict(
+                   (default): Identity()
+                 )
+                 (lora_A): ModuleDict(
+                   (default): Linear(in_features=4096, out_features=32, bias=False)
+                 )
+                 (lora_B): ModuleDict(
+                   (default): Linear(in_features=32, out_features=4096, bias=False)
+                 )
+                 (lora_embedding_A): ParameterDict()
+                 (lora_embedding_B): ParameterDict()
+               )
+               (rotary_emb): CohereRotaryEmbedding()
+             )
+             (mlp): CohereMLP(
+               (gate_proj): Linear4bit(in_features=4096, out_features=14336, bias=False)
+               (up_proj): Linear4bit(in_features=4096, out_features=14336, bias=False)
+               (down_proj): Linear4bit(in_features=14336, out_features=4096, bias=False)
+               (act_fn): SiLU()
+             )
+             (input_layernorm): CohereLayerNorm()
+           )
+         )
+         (norm): CohereLayerNorm()
+       )
+       (lm_head): Linear(in_features=4096, out_features=256000, bias=False)
+     )
+   )
+ )
+ ```
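+
+ Read off this tree: LoRA rank 32 (`lora_A` maps 4096 → 32) on the four attention projections, over an NF4-quantized base, with no LoRA dropout (`Identity()`). A PEFT configuration consistent with it would look roughly like the sketch below; `lora_alpha` is an assumption, since the printed tree does not record it:
+
+ ```py
+ # A PEFT/LoRA setup consistent with the module tree above (a sketch, not the
+ # exact training script). r, target modules, and dropout are read off the tree.
+ from peft import LoraConfig, get_peft_model
+
+ lora_config = LoraConfig(
+     r=32,                     # lora_A maps 4096 -> 32 in the tree
+     lora_alpha=32,            # assumed; not recorded in the printed tree
+     lora_dropout=0.0,         # (lora_dropout): Identity()
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
+     task_type="CAUSAL_LM",
+ )
+ # base_model: the 4-bit base model (loaded as in the Inference section below)
+ model = get_peft_model(base_model, lora_config)
+ ```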
+
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
 
@@ -54,6 +150,48 @@ The following hyperparameters were used during training:
 ### Training results
 
 
+ ## Inference
+
+ ```py
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+
+ # Configuration for this example; the base model is the one this adapter was tuned from.
+ BASE_MODEL_NAME = "CohereForAI/aya-23-8b"
+ QUANTIZE_4BIT = True
+ USE_FLASH_ATTENTION = False
+
+ quantization_config = None
+ if QUANTIZE_4BIT:
+     quantization_config = BitsAndBytesConfig(
+         load_in_4bit=True,
+         bnb_4bit_quant_type="nf4",
+         bnb_4bit_use_double_quant=True,
+         bnb_4bit_compute_dtype=torch.bfloat16,
+     )
+
+ attn_implementation = None
+ if USE_FLASH_ATTENTION:
+     attn_implementation = "flash_attention_2"
+
+ loaded_model = AutoModelForCausalLM.from_pretrained(
+     BASE_MODEL_NAME,
+     quantization_config=quantization_config,
+     attn_implementation=attn_implementation,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+ tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_NAME)
+ loaded_model.load_adapter("aya-23-8b-afrimmlu-lin")
+
+ # A Lingala multiple-choice question from AfriMMLU.
+ prompts = [
+     """Question : Kati ya kondima mibale elandi, oyo wapi ezali nyoso mibale ya solo (na 2019) ?
+ Choices : ['Bato bazali na mposa ya kozala optimiste mpo na mikolo ekoya ya bomoi na bango na mpe na makambo ekoya ya ekolo na bango to mokili.', 'Bato bazali na mposa ya kozala optimiste mpo na mikolo ekoya ya bomoi na bango moko, kasi pessimiste na mikolo ekoya ya Ekolo na bango to mokili.', 'Bato bazali na mposa ya kozala pessimiste mpo na mikolo ekoya ya bomoi na bango, kasi optimiste na mikolo ekoya ya ekolo na bango to mokili.', 'Bato bazali na mposa ya kozala pessimiste mpo na mikolo ekoya ya bomoi na bango na mpe mikolo ekoya ya ekolo na bango to mokili.']
+ """
+ ]
+
+ generations = generate_aya_23(prompts, loaded_model)
+
+ for p, g in zip(prompts, generations):
+     print("PROMPT", p, "RESPONSE", g, "\n", sep="\n")
+ ```
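+
+ `generate_aya_23` is not defined in this card. A minimal helper compatible with the call above could look like this (an assumption, not the author's code; it reuses the `tokenizer` loaded earlier and the standard chat-template generation path):
+
+ ```py
+ # Sketch of a compatible generate_aya_23 (assumed; the card does not define it).
+ # Applies the chat template, generates, and decodes only the newly generated tokens.
+ def generate_aya_23(prompts, model, max_new_tokens=256):
+     generations = []
+     for prompt in prompts:
+         input_ids = tokenizer.apply_chat_template(
+             [{"role": "user", "content": prompt}],
+             tokenize=True,
+             add_generation_prompt=True,
+             return_tensors="pt",
+         ).to(model.device)
+         output = model.generate(input_ids, max_new_tokens=max_new_tokens, do_sample=False)
+         generations.append(
+             tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True)
+         )
+     return generations
+ ```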
 
 ### Framework versions