Surabhi-K committed on
Commit
6b0fcb8
1 Parent(s): c4e5d2a

Upload 14 files

README.md ADDED
@@ -0,0 +1,202 @@
1
+ ---
2
+ library_name: peft
3
+ base_model: meta-llama/Meta-Llama-3-8B
4
+ ---
5
+
6
+ # Model Card for Model ID
7
+
8
+ <!-- Provide a quick summary of what the model is/does. -->
9
+
10
+
11
+
12
+ ## Model Details
13
+
14
+ ### Model Description
15
+
16
+ <!-- Provide a longer summary of what this model is. -->
17
+
18
+
19
+
20
+ - **Developed by:** [More Information Needed]
21
+ - **Funded by [optional]:** [More Information Needed]
22
+ - **Shared by [optional]:** [More Information Needed]
23
+ - **Model type:** [More Information Needed]
24
+ - **Language(s) (NLP):** [More Information Needed]
25
+ - **License:** [More Information Needed]
26
+ - **Finetuned from model [optional]:** [More Information Needed]
27
+
28
+ ### Model Sources [optional]
29
+
30
+ <!-- Provide the basic links for the model. -->
31
+
32
+ - **Repository:** [More Information Needed]
33
+ - **Paper [optional]:** [More Information Needed]
34
+ - **Demo [optional]:** [More Information Needed]
35
+
36
+ ## Uses
37
+
38
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
+
40
+ ### Direct Use
41
+
42
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43
+
44
+ [More Information Needed]
45
+
46
+ ### Downstream Use [optional]
47
+
48
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
+
50
+ [More Information Needed]
51
+
52
+ ### Out-of-Scope Use
53
+
54
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
+
56
+ [More Information Needed]
57
+
58
+ ## Bias, Risks, and Limitations
59
+
60
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
+
62
+ [More Information Needed]
63
+
64
+ ### Recommendations
65
+
66
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67
+
68
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
+
70
+ ## How to Get Started with the Model
71
+
72
+ Use the code below to get started with the model.
73
+
74
+ [More Information Needed]
75
+
76
+ ## Training Details
77
+
78
+ ### Training Data
79
+
80
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
+
82
+ [More Information Needed]
83
+
84
+ ### Training Procedure
85
+
86
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
+
88
+ #### Preprocessing [optional]
89
+
90
+ [More Information Needed]
91
+
92
+
93
+ #### Training Hyperparameters
94
+
95
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
+
97
+ #### Speeds, Sizes, Times [optional]
98
+
99
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
+
101
+ [More Information Needed]
102
+
103
+ ## Evaluation
104
+
105
+ <!-- This section describes the evaluation protocols and provides the results. -->
106
+
107
+ ### Testing Data, Factors & Metrics
108
+
109
+ #### Testing Data
110
+
111
+ <!-- This should link to a Dataset Card if possible. -->
112
+
113
+ [More Information Needed]
114
+
115
+ #### Factors
116
+
117
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
+
119
+ [More Information Needed]
120
+
121
+ #### Metrics
122
+
123
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
+
125
+ [More Information Needed]
126
+
127
+ ### Results
128
+
129
+ [More Information Needed]
130
+
131
+ #### Summary
132
+
133
+
134
+
135
+ ## Model Examination [optional]
136
+
137
+ <!-- Relevant interpretability work for the model goes here -->
138
+
139
+ [More Information Needed]
140
+
141
+ ## Environmental Impact
142
+
143
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
+
145
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
+
147
+ - **Hardware Type:** [More Information Needed]
148
+ - **Hours used:** [More Information Needed]
149
+ - **Cloud Provider:** [More Information Needed]
150
+ - **Compute Region:** [More Information Needed]
151
+ - **Carbon Emitted:** [More Information Needed]
152
+
153
+ ## Technical Specifications [optional]
154
+
155
+ ### Model Architecture and Objective
156
+
157
+ [More Information Needed]
158
+
159
+ ### Compute Infrastructure
160
+
161
+ [More Information Needed]
162
+
163
+ #### Hardware
164
+
165
+ [More Information Needed]
166
+
167
+ #### Software
168
+
169
+ [More Information Needed]
170
+
171
+ ## Citation [optional]
172
+
173
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
+
175
+ **BibTeX:**
176
+
177
+ [More Information Needed]
178
+
179
+ **APA:**
180
+
181
+ [More Information Needed]
182
+
183
+ ## Glossary [optional]
184
+
185
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
+
187
+ [More Information Needed]
188
+
189
+ ## More Information [optional]
190
+
191
+ [More Information Needed]
192
+
193
+ ## Model Card Authors [optional]
194
+
195
+ [More Information Needed]
196
+
197
+ ## Model Card Contact
198
+
199
+ [More Information Needed]
200
+ ### Framework versions
201
+
202
+ - PEFT 0.7.1
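
The "How to Get Started with the Model" section above is still a placeholder. Below is a minimal, hedged sketch of loading this LoRA adapter on top of the base model named in adapter_config.json; the adapter repo id is a placeholder for this repository's id, and the 4-bit quantization used during training is omitted for brevity.

```python
# Sketch only: load the base model, then attach this LoRA adapter with peft.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"            # base model from adapter_config.json
adapter_id = "<this-adapter-repo-or-local-path>"  # placeholder: id or path of this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```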
adapter_config.json ADDED
@@ -0,0 +1,31 @@
1
+ {
2
+ "alpha_pattern": {},
3
+ "auto_mapping": null,
4
+ "base_model_name_or_path": "meta-llama/Meta-Llama-3-8B",
5
+ "bias": "none",
6
+ "fan_in_fan_out": false,
7
+ "inference_mode": true,
8
+ "init_lora_weights": true,
9
+ "layers_pattern": null,
10
+ "layers_to_transform": null,
11
+ "loftq_config": {},
12
+ "lora_alpha": 32,
13
+ "lora_dropout": 0.05,
14
+ "megatron_config": null,
15
+ "megatron_core": "megatron.core",
16
+ "modules_to_save": null,
17
+ "peft_type": "LORA",
18
+ "r": 16,
19
+ "rank_pattern": {},
20
+ "revision": null,
21
+ "target_modules": [
22
+ "down_proj",
23
+ "up_proj",
24
+ "o_proj",
25
+ "q_proj",
26
+ "v_proj",
27
+ "gate_proj",
28
+ "k_proj"
29
+ ],
30
+ "task_type": "CAUSAL_LM"
31
+ }
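
For reference, a sketch of the `peft.LoraConfig` equivalent to the adapter settings above (rank 16, alpha 32, dropout 0.05, all attention and MLP projections targeted):

```python
# Sketch: LoraConfig mirroring adapter_config.json above (peft 0.7.x API).
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)
```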
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ac3fa2b1c2b2f0dab0f3b5ce7e696d0e5f5f2ae1b672190d13de884428872a45
3
+ size 167832240
conda-environment.yaml ADDED
File without changes
config.yaml ADDED
@@ -0,0 +1,680 @@
1
+ wandb_version: 1
2
+
3
+ _wandb:
4
+ desc: null
5
+ value:
6
+ python_version: 3.10.13
7
+ cli_version: 0.16.6
8
+ framework: huggingface
9
+ huggingface_version: 4.36.2
10
+ is_jupyter_run: true
11
+ is_kaggle_kernel: true
12
+ start_time: 1714028759.0
13
+ t:
14
+ 1:
15
+ - 1
16
+ - 2
17
+ - 3
18
+ - 5
19
+ - 11
20
+ - 12
21
+ - 49
22
+ - 51
23
+ - 53
24
+ - 55
25
+ - 71
26
+ - 98
27
+ - 105
28
+ 2:
29
+ - 1
30
+ - 2
31
+ - 3
32
+ - 5
33
+ - 11
34
+ - 12
35
+ - 49
36
+ - 51
37
+ - 53
38
+ - 55
39
+ - 71
40
+ - 98
41
+ - 105
42
+ 3:
43
+ - 7
44
+ - 13
45
+ - 23
46
+ 4: 3.10.13
47
+ 5: 0.16.6
48
+ 6: 4.36.2
49
+ 8:
50
+ - 1
51
+ - 2
52
+ - 5
53
+ 9:
54
+ 1: transformers_trainer
55
+ 13: linux-x86_64
56
+ m:
57
+ - 1: train/global_step
58
+ 6:
59
+ - 3
60
+ - 1: train/loss
61
+ 5: 1
62
+ 6:
63
+ - 1
64
+ - 1: train/learning_rate
65
+ 5: 1
66
+ 6:
67
+ - 1
68
+ - 1: train/epoch
69
+ 5: 1
70
+ 6:
71
+ - 1
72
+ - 1: eval/loss
73
+ 5: 1
74
+ 6:
75
+ - 1
76
+ - 1: eval/runtime
77
+ 5: 1
78
+ 6:
79
+ - 1
80
+ - 1: eval/samples_per_second
81
+ 5: 1
82
+ 6:
83
+ - 1
84
+ - 1: eval/steps_per_second
85
+ 5: 1
86
+ 6:
87
+ - 1
88
+ vocab_size:
89
+ desc: null
90
+ value: 128256
91
+ max_position_embeddings:
92
+ desc: null
93
+ value: 8192
94
+ hidden_size:
95
+ desc: null
96
+ value: 4096
97
+ intermediate_size:
98
+ desc: null
99
+ value: 14336
100
+ num_hidden_layers:
101
+ desc: null
102
+ value: 32
103
+ num_attention_heads:
104
+ desc: null
105
+ value: 32
106
+ num_key_value_heads:
107
+ desc: null
108
+ value: 8
109
+ hidden_act:
110
+ desc: null
111
+ value: silu
112
+ initializer_range:
113
+ desc: null
114
+ value: 0.02
115
+ rms_norm_eps:
116
+ desc: null
117
+ value: 1.0e-05
118
+ pretraining_tp:
119
+ desc: null
120
+ value: 1
121
+ use_cache:
122
+ desc: null
123
+ value: false
124
+ rope_theta:
125
+ desc: null
126
+ value: 500000.0
127
+ rope_scaling:
128
+ desc: null
129
+ value: null
130
+ attention_bias:
131
+ desc: null
132
+ value: false
133
+ attention_dropout:
134
+ desc: null
135
+ value: 0.0
136
+ return_dict:
137
+ desc: null
138
+ value: true
139
+ output_hidden_states:
140
+ desc: null
141
+ value: false
142
+ output_attentions:
143
+ desc: null
144
+ value: false
145
+ torchscript:
146
+ desc: null
147
+ value: false
148
+ torch_dtype:
149
+ desc: null
150
+ value: bfloat16
151
+ use_bfloat16:
152
+ desc: null
153
+ value: false
154
+ tf_legacy_loss:
155
+ desc: null
156
+ value: false
157
+ pruned_heads:
158
+ desc: null
159
+ value: {}
160
+ tie_word_embeddings:
161
+ desc: null
162
+ value: false
163
+ is_encoder_decoder:
164
+ desc: null
165
+ value: false
166
+ is_decoder:
167
+ desc: null
168
+ value: false
169
+ cross_attention_hidden_size:
170
+ desc: null
171
+ value: null
172
+ add_cross_attention:
173
+ desc: null
174
+ value: false
175
+ tie_encoder_decoder:
176
+ desc: null
177
+ value: false
178
+ max_length:
179
+ desc: null
180
+ value: 20
181
+ min_length:
182
+ desc: null
183
+ value: 0
184
+ do_sample:
185
+ desc: null
186
+ value: false
187
+ early_stopping:
188
+ desc: null
189
+ value: false
190
+ num_beams:
191
+ desc: null
192
+ value: 1
193
+ num_beam_groups:
194
+ desc: null
195
+ value: 1
196
+ diversity_penalty:
197
+ desc: null
198
+ value: 0.0
199
+ temperature:
200
+ desc: null
201
+ value: 1.0
202
+ top_k:
203
+ desc: null
204
+ value: 50
205
+ top_p:
206
+ desc: null
207
+ value: 1.0
208
+ typical_p:
209
+ desc: null
210
+ value: 1.0
211
+ repetition_penalty:
212
+ desc: null
213
+ value: 1.0
214
+ length_penalty:
215
+ desc: null
216
+ value: 1.0
217
+ no_repeat_ngram_size:
218
+ desc: null
219
+ value: 0
220
+ encoder_no_repeat_ngram_size:
221
+ desc: null
222
+ value: 0
223
+ bad_words_ids:
224
+ desc: null
225
+ value: null
226
+ num_return_sequences:
227
+ desc: null
228
+ value: 1
229
+ chunk_size_feed_forward:
230
+ desc: null
231
+ value: 0
232
+ output_scores:
233
+ desc: null
234
+ value: false
235
+ return_dict_in_generate:
236
+ desc: null
237
+ value: false
238
+ forced_bos_token_id:
239
+ desc: null
240
+ value: null
241
+ forced_eos_token_id:
242
+ desc: null
243
+ value: null
244
+ remove_invalid_values:
245
+ desc: null
246
+ value: false
247
+ exponential_decay_length_penalty:
248
+ desc: null
249
+ value: null
250
+ suppress_tokens:
251
+ desc: null
252
+ value: null
253
+ begin_suppress_tokens:
254
+ desc: null
255
+ value: null
256
+ architectures:
257
+ desc: null
258
+ value:
259
+ - LlamaForCausalLM
260
+ finetuning_task:
261
+ desc: null
262
+ value: null
263
+ id2label:
264
+ desc: null
265
+ value:
266
+ '0': LABEL_0
267
+ '1': LABEL_1
268
+ label2id:
269
+ desc: null
270
+ value:
271
+ LABEL_0: 0
272
+ LABEL_1: 1
273
+ tokenizer_class:
274
+ desc: null
275
+ value: null
276
+ prefix:
277
+ desc: null
278
+ value: null
279
+ bos_token_id:
280
+ desc: null
281
+ value: 128000
282
+ pad_token_id:
283
+ desc: null
284
+ value: null
285
+ eos_token_id:
286
+ desc: null
287
+ value: 128001
288
+ sep_token_id:
289
+ desc: null
290
+ value: null
291
+ decoder_start_token_id:
292
+ desc: null
293
+ value: null
294
+ task_specific_params:
295
+ desc: null
296
+ value: null
297
+ problem_type:
298
+ desc: null
299
+ value: null
300
+ _name_or_path:
301
+ desc: null
302
+ value: meta-llama/Meta-Llama-3-8B
303
+ transformers_version:
304
+ desc: null
305
+ value: 4.36.2
306
+ model_type:
307
+ desc: null
308
+ value: llama
309
+ quantization_config:
310
+ desc: null
311
+ value:
312
+ quant_method: QuantizationMethod.BITS_AND_BYTES
313
+ load_in_8bit: false
314
+ load_in_4bit: true
315
+ llm_int8_threshold: 6.0
316
+ llm_int8_skip_modules: null
317
+ llm_int8_enable_fp32_cpu_offload: true
318
+ llm_int8_has_fp16_weight: false
319
+ bnb_4bit_quant_type: nf4
320
+ bnb_4bit_use_double_quant: true
321
+ bnb_4bit_compute_dtype: bfloat16
322
+ output_dir:
323
+ desc: null
324
+ value: /kaggle/working/trainer/
325
+ overwrite_output_dir:
326
+ desc: null
327
+ value: false
328
+ do_train:
329
+ desc: null
330
+ value: false
331
+ do_eval:
332
+ desc: null
333
+ value: true
334
+ do_predict:
335
+ desc: null
336
+ value: false
337
+ evaluation_strategy:
338
+ desc: null
339
+ value: epoch
340
+ prediction_loss_only:
341
+ desc: null
342
+ value: false
343
+ per_device_train_batch_size:
344
+ desc: null
345
+ value: 2
346
+ per_device_eval_batch_size:
347
+ desc: null
348
+ value: 2
349
+ per_gpu_train_batch_size:
350
+ desc: null
351
+ value: null
352
+ per_gpu_eval_batch_size:
353
+ desc: null
354
+ value: null
355
+ gradient_accumulation_steps:
356
+ desc: null
357
+ value: 4
358
+ eval_accumulation_steps:
359
+ desc: null
360
+ value: null
361
+ eval_delay:
362
+ desc: null
363
+ value: 0
364
+ learning_rate:
365
+ desc: null
366
+ value: 5.0e-05
367
+ weight_decay:
368
+ desc: null
369
+ value: 0.01
370
+ adam_beta1:
371
+ desc: null
372
+ value: 0.9
373
+ adam_beta2:
374
+ desc: null
375
+ value: 0.999
376
+ adam_epsilon:
377
+ desc: null
378
+ value: 1.0e-08
379
+ max_grad_norm:
380
+ desc: null
381
+ value: 1.0
382
+ num_train_epochs:
383
+ desc: null
384
+ value: 5
385
+ max_steps:
386
+ desc: null
387
+ value: -1
388
+ lr_scheduler_type:
389
+ desc: null
390
+ value: linear
391
+ lr_scheduler_kwargs:
392
+ desc: null
393
+ value: {}
394
+ warmup_ratio:
395
+ desc: null
396
+ value: 0.0
397
+ warmup_steps:
398
+ desc: null
399
+ value: 50
400
+ log_level:
401
+ desc: null
402
+ value: passive
403
+ log_level_replica:
404
+ desc: null
405
+ value: warning
406
+ log_on_each_node:
407
+ desc: null
408
+ value: true
409
+ logging_dir:
410
+ desc: null
411
+ value: /kaggle/working/logs
412
+ logging_strategy:
413
+ desc: null
414
+ value: epoch
415
+ logging_first_step:
416
+ desc: null
417
+ value: false
418
+ logging_steps:
419
+ desc: null
420
+ value: 500
421
+ logging_nan_inf_filter:
422
+ desc: null
423
+ value: true
424
+ save_strategy:
425
+ desc: null
426
+ value: epoch
427
+ save_steps:
428
+ desc: null
429
+ value: 500
430
+ save_total_limit:
431
+ desc: null
432
+ value: 5
433
+ save_safetensors:
434
+ desc: null
435
+ value: true
436
+ save_on_each_node:
437
+ desc: null
438
+ value: false
439
+ save_only_model:
440
+ desc: null
441
+ value: false
442
+ no_cuda:
443
+ desc: null
444
+ value: false
445
+ use_cpu:
446
+ desc: null
447
+ value: false
448
+ use_mps_device:
449
+ desc: null
450
+ value: false
451
+ seed:
452
+ desc: null
453
+ value: 42
454
+ data_seed:
455
+ desc: null
456
+ value: null
457
+ jit_mode_eval:
458
+ desc: null
459
+ value: false
460
+ use_ipex:
461
+ desc: null
462
+ value: false
463
+ bf16:
464
+ desc: null
465
+ value: false
466
+ fp16:
467
+ desc: null
468
+ value: true
469
+ fp16_opt_level:
470
+ desc: null
471
+ value: O1
472
+ half_precision_backend:
473
+ desc: null
474
+ value: auto
475
+ bf16_full_eval:
476
+ desc: null
477
+ value: false
478
+ fp16_full_eval:
479
+ desc: null
480
+ value: false
481
+ tf32:
482
+ desc: null
483
+ value: null
484
+ local_rank:
485
+ desc: null
486
+ value: 0
487
+ ddp_backend:
488
+ desc: null
489
+ value: null
490
+ tpu_num_cores:
491
+ desc: null
492
+ value: null
493
+ tpu_metrics_debug:
494
+ desc: null
495
+ value: false
496
+ debug:
497
+ desc: null
498
+ value: []
499
+ dataloader_drop_last:
500
+ desc: null
501
+ value: false
502
+ eval_steps:
503
+ desc: null
504
+ value: null
505
+ dataloader_num_workers:
506
+ desc: null
507
+ value: 0
508
+ past_index:
509
+ desc: null
510
+ value: -1
511
+ run_name:
512
+ desc: null
513
+ value: model_1_5epochs
514
+ disable_tqdm:
515
+ desc: null
516
+ value: false
517
+ remove_unused_columns:
518
+ desc: null
519
+ value: true
520
+ label_names:
521
+ desc: null
522
+ value: null
523
+ load_best_model_at_end:
524
+ desc: null
525
+ value: true
526
+ metric_for_best_model:
527
+ desc: null
528
+ value: loss
529
+ greater_is_better:
530
+ desc: null
531
+ value: false
532
+ ignore_data_skip:
533
+ desc: null
534
+ value: false
535
+ fsdp:
536
+ desc: null
537
+ value: []
538
+ fsdp_min_num_params:
539
+ desc: null
540
+ value: 0
541
+ fsdp_config:
542
+ desc: null
543
+ value:
544
+ min_num_params: 0
545
+ xla: false
546
+ xla_fsdp_grad_ckpt: false
547
+ fsdp_transformer_layer_cls_to_wrap:
548
+ desc: null
549
+ value: null
550
+ deepspeed:
551
+ desc: null
552
+ value: null
553
+ label_smoothing_factor:
554
+ desc: null
555
+ value: 0.0
556
+ optim:
557
+ desc: null
558
+ value: paged_adamw_8bit
559
+ optim_args:
560
+ desc: null
561
+ value: null
562
+ adafactor:
563
+ desc: null
564
+ value: false
565
+ group_by_length:
566
+ desc: null
567
+ value: false
568
+ length_column_name:
569
+ desc: null
570
+ value: length
571
+ report_to:
572
+ desc: null
573
+ value:
574
+ - wandb
575
+ ddp_find_unused_parameters:
576
+ desc: null
577
+ value: null
578
+ ddp_bucket_cap_mb:
579
+ desc: null
580
+ value: null
581
+ ddp_broadcast_buffers:
582
+ desc: null
583
+ value: null
584
+ dataloader_pin_memory:
585
+ desc: null
586
+ value: true
587
+ dataloader_persistent_workers:
588
+ desc: null
589
+ value: false
590
+ skip_memory_metrics:
591
+ desc: null
592
+ value: true
593
+ use_legacy_prediction_loop:
594
+ desc: null
595
+ value: false
596
+ push_to_hub:
597
+ desc: null
598
+ value: false
599
+ resume_from_checkpoint:
600
+ desc: null
601
+ value: null
602
+ hub_model_id:
603
+ desc: null
604
+ value: null
605
+ hub_strategy:
606
+ desc: null
607
+ value: every_save
608
+ hub_token:
609
+ desc: null
610
+ value: <HUB_TOKEN>
611
+ hub_private_repo:
612
+ desc: null
613
+ value: false
614
+ hub_always_push:
615
+ desc: null
616
+ value: false
617
+ gradient_checkpointing:
618
+ desc: null
619
+ value: true
620
+ gradient_checkpointing_kwargs:
621
+ desc: null
622
+ value:
623
+ use_reentrant: false
624
+ include_inputs_for_metrics:
625
+ desc: null
626
+ value: false
627
+ fp16_backend:
628
+ desc: null
629
+ value: auto
630
+ push_to_hub_model_id:
631
+ desc: null
632
+ value: null
633
+ push_to_hub_organization:
634
+ desc: null
635
+ value: null
636
+ push_to_hub_token:
637
+ desc: null
638
+ value: <PUSH_TO_HUB_TOKEN>
639
+ mp_parameters:
640
+ desc: null
641
+ value: ''
642
+ auto_find_batch_size:
643
+ desc: null
644
+ value: false
645
+ full_determinism:
646
+ desc: null
647
+ value: false
648
+ torchdynamo:
649
+ desc: null
650
+ value: null
651
+ ray_scope:
652
+ desc: null
653
+ value: last
654
+ ddp_timeout:
655
+ desc: null
656
+ value: 1800
657
+ torch_compile:
658
+ desc: null
659
+ value: false
660
+ torch_compile_backend:
661
+ desc: null
662
+ value: null
663
+ torch_compile_mode:
664
+ desc: null
665
+ value: null
666
+ dispatch_batches:
667
+ desc: null
668
+ value: null
669
+ split_batches:
670
+ desc: null
671
+ value: false
672
+ include_tokens_per_second:
673
+ desc: null
674
+ value: false
675
+ include_num_input_tokens_seen:
676
+ desc: null
677
+ value: false
678
+ neftune_noise_alpha:
679
+ desc: null
680
+ value: null
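
The `quantization_config` block logged above records 4-bit NF4 quantization with double quantization and bfloat16 compute. A sketch of the matching `transformers.BitsAndBytesConfig` (assuming `bitsandbytes` is installed) would be:

```python
# Sketch: quantization settings as recorded in the wandb config above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_enable_fp32_cpu_offload=True,
)
```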
optimizer.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:13178e5090d1c2e736aa36acf28ae3fa57df1196133cb7820a60e0f21c6077cb
3
+ size 84581014
output.log ADDED
@@ -0,0 +1,51 @@
1
+ huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
2
+ To disable this warning, you can either:
3
+ - Avoid using `tokenizers` before the fork if possible
4
+ - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
5
+ /bin/bash: nvdia-smi: command not found
6
+ huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
7
+ To disable this warning, you can either:
8
+ - Avoid using `tokenizers` before the fork if possible
9
+ - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
10
+ adding: kaggle/working/ (stored 0%)
11
+ adding: kaggle/working/test.csv (deflated 81%)
12
+ adding: kaggle/working/trainer/ (stored 0%)
13
+ adding: kaggle/working/trainer/README.md (deflated 48%)
14
+ adding: kaggle/working/trainer/adapter_config.json (deflated 52%)
15
+ adding: kaggle/working/trainer/checkpoint-118/ (stored 0%)
16
+ adding: kaggle/working/trainer/checkpoint-118/rng_state.pth (deflated 25%)
17
+ adding: kaggle/working/trainer/checkpoint-118/optimizer.pt (deflated 16%)
18
+ adding: kaggle/working/trainer/checkpoint-118/README.md (deflated 66%)
19
+ adding: kaggle/working/trainer/checkpoint-118/scheduler.pt (deflated 56%)
20
+ adding: kaggle/working/trainer/checkpoint-118/adapter_config.json (deflated 52%)
21
+ adding: kaggle/working/trainer/checkpoint-118/training_args.bin (deflated 51%)
22
+ adding: kaggle/working/trainer/checkpoint-118/trainer_state.json (deflated 55%)
23
+ adding: kaggle/working/trainer/checkpoint-118/adapter_model.safetensors (deflated 8%)
24
+ adding: kaggle/working/trainer/checkpoint-472/ (stored 0%)
25
+ adding: kaggle/working/trainer/checkpoint-472/rng_state.pth (deflated 25%)
26
+ adding: kaggle/working/trainer/checkpoint-472/optimizer.pt (deflated 16%)
27
+ adding: kaggle/working/trainer/checkpoint-472/README.md (deflated 66%)
28
+ adding: kaggle/working/trainer/checkpoint-472/scheduler.pt (deflated 55%)
29
+ adding: kaggle/working/trainer/checkpoint-472/adapter_config.json (deflated 52%)
30
+ adding: kaggle/working/trainer/checkpoint-472/training_args.bin (deflated 51%)
31
+ adding: kaggle/working/trainer/checkpoint-472/trainer_state.json (deflated 71%)
32
+ adding: kaggle/working/trainer/checkpoint-472/adapter_model.safetensors (deflated 7%)
33
+ adding: kaggle/working/trainer/checkpoint-236/ (stored 0%)
34
+ adding: kaggle/working/trainer/checkpoint-236/rng_state.pth (deflated 25%)
35
+ adding: kaggle/working/trainer/checkpoint-236/optimizer.pt (deflated 16%)
36
+ adding: kaggle/working/trainer/checkpoint-236/README.md (deflated 66%)
37
+ adding: kaggle/working/trainer/checkpoint-236/scheduler.pt (deflated 56%)
38
+ adding: kaggle/working/trainer/checkpoint-236/adapter_config.json (deflated 52%)
39
+ adding: kaggle/working/trainer/checkpoint-236/training_args.bin (deflated 51%)
40
+ adding: kaggle/working/trainer/checkpoint-236/trainer_state.json (deflated 63%)
41
+ adding: kaggle/working/trainer/checkpoint-236/adapter_model.safetensors (deflated 7%)
42
+ adding: kaggle/working/trainer/training_args.bin (deflated 51%)
43
+ adding: kaggle/working/trainer/checkpoint-354/ (stored 0%)
44
+ adding: kaggle/working/trainer/checkpoint-354/rng_state.pth (deflated 25%)
45
+ adding: kaggle/working/trainer/checkpoint-354/optimizer.pt (deflated 16%)
46
+ adding: kaggle/working/trainer/checkpoint-354/README.md (deflated 66%)
47
+ adding: kaggle/working/trainer/checkpoint-354/scheduler.pt (deflated 55%)
48
+ adding: kaggle/working/trainer/checkpoint-354/adapter_config.json (deflated 52%)
49
+ adding: kaggle/working/trainer/checkpoint-354/training_args.bin (deflated 51%)
50
+ adding: kaggle/working/trainer/checkpoint-354/trainer_state.json (deflated 68%)
51
+ adding: kaggle/working/trainer/checkpoint-354/adapter_model.safetensors (deflated 7%)
requirements.txt ADDED
@@ -0,0 +1,862 @@
1
+ Babel==2.14.0
2
+ Boruta==0.3
3
+ Brotli==1.0.9
4
+ CVXcanon==0.1.2
5
+ Cartopy==0.23.0
6
+ Cython==3.0.8
7
+ Deprecated==1.2.14
8
+ Farama-Notifications==0.0.4
9
+ Flask==3.0.3
10
+ Geohash==1.0
11
+ GitPython==3.1.41
12
+ ImageHash==4.3.1
13
+ Janome==0.5.0
14
+ Jinja2==3.1.2
15
+ LunarCalendar==0.0.9
16
+ Mako==1.3.3
17
+ Markdown==3.5.2
18
+ MarkupSafe==2.1.3
19
+ MarkupSafe==2.1.5
20
+ Pillow==9.5.0
21
+ PuLP==2.8.0
22
+ PyArabic==0.6.15
23
+ PyJWT==2.8.0
24
+ PyMeeus==0.5.12
25
+ PySocks==1.7.1
26
+ PyUpSet==0.1.1.post7
27
+ PyWavelets==1.5.0
28
+ PyYAML==6.0.1
29
+ Pygments==2.17.2
30
+ Pympler==1.0.1
31
+ QtPy==2.4.1
32
+ Rtree==1.2.0
33
+ SQLAlchemy==2.0.25
34
+ SecretStorage==3.3.3
35
+ Send2Trash==1.8.2
36
+ Shapely==1.8.5.post1
37
+ Shimmy==1.3.0
38
+ SimpleITK==2.3.1
39
+ TPOT==0.12.1
40
+ Theano-PyMC==1.1.2
41
+ Theano==1.0.5
42
+ Wand==0.6.13
43
+ Werkzeug==3.0.2
44
+ absl-py==1.4.0
45
+ accelerate==0.25.0
46
+ access==1.1.9
47
+ affine==2.4.0
48
+ aiobotocore==2.12.3
49
+ aiofiles==22.1.0
50
+ aiohttp-cors==0.7.0
51
+ aiohttp==3.9.1
52
+ aioitertools==0.11.0
53
+ aiorwlock==1.3.0
54
+ aiosignal==1.3.1
55
+ aiosqlite==0.19.0
56
+ albumentations==1.4.0
57
+ alembic==1.13.1
58
+ altair==5.3.0
59
+ annotated-types==0.6.0
60
+ annoy==1.17.3
61
+ anyio==4.2.0
62
+ apache-beam==2.46.0
63
+ aplus==0.11.0
64
+ appdirs==1.4.4
65
+ archspec==0.2.3
66
+ argon2-cffi-bindings==21.2.0
67
+ argon2-cffi==23.1.0
68
+ array-record==0.5.0
69
+ arrow==1.3.0
70
+ arviz==0.18.0
71
+ astroid==3.1.0
72
+ astropy-iers-data==0.2024.4.15.2.45.49
73
+ astropy==6.0.1
74
+ asttokens==2.4.1
75
+ astunparse==1.6.3
76
+ async-lru==2.0.4
77
+ async-timeout==4.0.3
78
+ attrs==23.2.0
79
+ audioread==3.0.1
80
+ autopep8==2.0.4
81
+ backoff==2.2.1
82
+ bayesian-optimization==1.4.3
83
+ beatrix_jupyterlab==2023.128.151533
84
+ beautifulsoup4==4.12.2
85
+ bitsandbytes==0.41.3
86
+ blake3==0.2.1
87
+ bleach==6.1.0
88
+ blessed==1.20.0
89
+ blinker==1.7.0
90
+ blis==0.7.10
91
+ blosc2==2.6.2
92
+ bokeh==3.4.1
93
+ boltons==23.1.1
94
+ boto3==1.26.100
95
+ botocore==1.34.69
96
+ bq_helper==0.4.1
97
+ bqplot==0.12.43
98
+ branca==0.7.1
99
+ brewer2mpl==1.4.1
100
+ brotlipy==0.7.0
101
+ cached-property==1.5.2
102
+ cachetools==4.2.4
103
+ cachetools==5.3.2
104
+ catalogue==2.0.10
105
+ catalyst==22.4
106
+ catboost==1.2.3
107
+ category-encoders==2.6.3
108
+ certifi==2024.2.2
109
+ cesium==0.12.1
110
+ cffi==1.16.0
111
+ charset-normalizer==3.3.2
112
+ chex==0.1.86
113
+ cleverhans==4.0.0
114
+ click-plugins==1.1.1
115
+ click==8.1.7
116
+ cligj==0.7.2
117
+ cloud-tpu-client==0.10
118
+ cloud-tpu-profiler==2.4.0
119
+ cloudpathlib==0.16.0
120
+ cloudpickle==2.2.1
121
+ cloudpickle==3.0.0
122
+ cmdstanpy==1.2.2
123
+ colorama==0.4.6
124
+ colorcet==3.1.0
125
+ colorful==0.5.6
126
+ colorlog==6.8.2
127
+ colorlover==0.3.0
128
+ comm==0.2.1
129
+ conda-libmamba-solver==23.7.0
130
+ conda-package-handling==2.2.0
131
+ conda==23.7.4
132
+ conda_package_streaming==0.9.0
133
+ confection==0.1.4
134
+ contextily==1.6.0
135
+ contourpy==1.2.0
136
+ contourpy==1.2.1
137
+ convertdate==2.4.0
138
+ crcmod==1.7
139
+ cryptography==41.0.7
140
+ cuda-python==12.4.0
141
+ cudf==23.8.0
142
+ cufflinks==0.17.3
143
+ cuml==23.8.0
144
+ cupy==13.0.0
145
+ cycler==0.12.1
146
+ cymem==2.0.8
147
+ cytoolz==0.12.3
148
+ daal4py==2024.3.0
149
+ daal==2024.3.0
150
+ dacite==1.8.1
151
+ dask-cuda==23.8.0
152
+ dask-cudf==23.8.0
153
+ dask-expr==1.0.11
154
+ dask==2024.4.1
155
+ dataclasses-json==0.6.4
156
+ dataproc_jupyter_plugin==0.1.66
157
+ datasets==2.18.0
158
+ datashader==0.16.0
159
+ datatile==1.0.3
160
+ db-dtypes==1.2.0
161
+ deap==1.4.1
162
+ debugpy==1.8.0
163
+ decorator==5.1.1
164
+ deepdiff==7.0.1
165
+ defusedxml==0.7.1
166
+ deprecation==2.1.0
167
+ descartes==1.1.0
168
+ dill==0.3.8
169
+ dipy==1.9.0
170
+ distlib==0.3.8
171
+ distributed==2023.7.1
172
+ distro==1.9.0
173
+ dm-tree==0.1.8
174
+ docker-pycreds==0.4.0
175
+ docker==7.0.0
176
+ docopt==0.6.2
177
+ docstring-parser==0.15
178
+ docstring-to-markdown==0.15
179
+ docutils==0.21.1
180
+ earthengine-api==0.1.399
181
+ easydict==1.13
182
+ easyocr==1.7.1
183
+ ecos==2.0.13
184
+ eli5==0.13.0
185
+ emoji==2.11.0
186
+ en-core-web-lg==3.7.1
187
+ en-core-web-sm==3.7.1
188
+ entrypoints==0.4
189
+ ephem==4.1.5
190
+ esda==2.5.1
191
+ essentia==2.1b6.dev1110
192
+ et-xmlfile==1.1.0
193
+ etils==1.6.0
194
+ exceptiongroup==1.2.0
195
+ executing==2.0.1
196
+ explainable-ai-sdk==1.3.3
197
+ fastai==2.7.14
198
+ fastapi==0.108.0
199
+ fastavro==1.9.3
200
+ fastcore==1.5.29
201
+ fastdownload==0.0.7
202
+ fasteners==0.19
203
+ fastjsonschema==2.19.1
204
+ fastprogress==1.0.3
205
+ fastrlock==0.8.2
206
+ fasttext==0.9.2
207
+ feather-format==0.4.1
208
+ featuretools==1.30.0
209
+ filelock==3.13.1
210
+ fiona==1.9.6
211
+ fitter==1.7.0
212
+ flake8==7.0.0
213
+ flashtext==2.7
214
+ flatbuffers==23.5.26
215
+ flax==0.8.2
216
+ folium==0.16.0
217
+ fonttools==4.47.0
218
+ fonttools==4.51.0
219
+ fqdn==1.5.1
220
+ frozendict==2.4.2
221
+ frozenlist==1.4.1
222
+ fsspec==2024.2.0
223
+ fsspec==2024.3.1
224
+ funcy==2.0
225
+ fury==0.10.0
226
+ future==1.0.0
227
+ fuzzywuzzy==0.18.0
228
+ gast==0.5.4
229
+ gatspy==0.3
230
+ gcsfs==2024.2.0
231
+ gensim==4.3.2
232
+ geographiclib==2.0
233
+ geojson==3.1.0
234
+ geopandas==0.14.3
235
+ geoplot==0.5.1
236
+ geopy==2.4.1
237
+ geoviews==1.12.0
238
+ ggplot==0.11.5
239
+ giddy==2.3.5
240
+ gitdb==4.0.11
241
+ google-ai-generativelanguage==0.6.2
242
+ google-api-core==2.11.1
243
+ google-api-core==2.18.0
244
+ google-api-python-client==2.126.0
245
+ google-apitools==0.5.31
246
+ google-auth-httplib2==0.2.0
247
+ google-auth-oauthlib==1.2.0
248
+ google-auth==2.26.1
249
+ google-cloud-aiplatform==0.6.0a1
250
+ google-cloud-artifact-registry==1.10.0
251
+ google-cloud-automl==1.0.1
252
+ google-cloud-bigquery==2.34.4
253
+ google-cloud-bigtable==1.7.3
254
+ google-cloud-core==2.4.1
255
+ google-cloud-datastore==2.19.0
256
+ google-cloud-dlp==3.14.0
257
+ google-cloud-jupyter-config==0.0.5
258
+ google-cloud-language==2.13.3
259
+ google-cloud-monitoring==2.18.0
260
+ google-cloud-pubsub==2.19.0
261
+ google-cloud-pubsublite==1.9.0
262
+ google-cloud-recommendations-ai==0.7.1
263
+ google-cloud-resource-manager==1.11.0
264
+ google-cloud-spanner==3.40.1
265
+ google-cloud-storage==1.44.0
266
+ google-cloud-translate==3.12.1
267
+ google-cloud-videointelligence==2.13.3
268
+ google-cloud-vision==2.8.0
269
+ google-crc32c==1.5.0
270
+ google-generativeai==0.5.1
271
+ google-pasta==0.2.0
272
+ google-resumable-media==2.7.0
273
+ googleapis-common-protos==1.62.0
274
+ gplearn==0.4.2
275
+ gpustat==1.0.0
276
+ gpxpy==1.6.2
277
+ graphviz==0.20.3
278
+ greenlet==3.0.3
279
+ grpc-google-iam-v1==0.12.7
280
+ grpcio-status==1.48.1
281
+ grpcio-status==1.48.2
282
+ grpcio==1.51.1
283
+ grpcio==1.60.0
284
+ gviz-api==1.10.0
285
+ gym-notices==0.0.8
286
+ gym==0.26.2
287
+ gymnasium==0.29.0
288
+ h11==0.14.0
289
+ h2o==3.46.0.1
290
+ h5netcdf==1.3.0
291
+ h5py==3.10.0
292
+ haversine==2.8.1
293
+ hdfs==2.7.3
294
+ hep-ml==0.7.2
295
+ hijri-converter==2.3.1
296
+ hmmlearn==0.3.2
297
+ holidays==0.24
298
+ holoviews==1.18.3
299
+ hpsklearn==0.1.0
300
+ html5lib==1.1
301
+ htmlmin==0.1.12
302
+ httpcore==1.0.5
303
+ httplib2==0.21.0
304
+ httptools==0.6.1
305
+ httpx==0.27.0
306
+ huggingface-hub==0.22.2
307
+ hunspell==0.5.5
308
+ hydra-slayer==0.5.0
309
+ hyperopt==0.2.7
310
+ hypertools==0.8.0
311
+ idna==3.6
312
+ igraph==0.11.4
313
+ imagecodecs==2024.1.1
314
+ imageio==2.33.1
315
+ imbalanced-learn==0.12.2
316
+ imgaug==0.4.0
317
+ importlib-metadata==6.11.0
318
+ importlib-metadata==7.0.1
319
+ importlib-resources==6.1.1
320
+ inequality==1.0.1
321
+ iniconfig==2.0.0
322
+ ipydatawidgets==4.3.5
323
+ ipykernel==6.28.0
324
+ ipyleaflet==0.18.2
325
+ ipympl==0.7.0
326
+ ipython-genutils==0.2.0
327
+ ipython-genutils==0.2.0
328
+ ipython-sql==0.5.0
329
+ ipython==8.20.0
330
+ ipyvolume==0.6.3
331
+ ipyvue==1.11.0
332
+ ipyvuetify==1.9.4
333
+ ipywebrtc==0.6.0
334
+ ipywidgets==7.7.1
335
+ isoduration==20.11.0
336
+ isort==5.13.2
337
+ isoweek==1.3.3
338
+ itsdangerous==2.2.0
339
+ jaraco.classes==3.3.0
340
+ jax-jumpy==1.0.0
341
+ jax==0.4.23
342
+ jaxlib==0.4.23.dev20240116
343
+ jedi==0.19.1
344
+ jeepney==0.8.0
345
+ jieba==0.42.1
346
+ jmespath==1.0.1
347
+ joblib==1.4.0
348
+ json5==0.9.14
349
+ jsonpatch==1.33
350
+ jsonpointer==2.4
351
+ jsonschema-specifications==2023.12.1
352
+ jsonschema==4.20.0
353
+ jupyter-console==6.6.3
354
+ jupyter-events==0.9.0
355
+ jupyter-http-over-ws==0.0.8
356
+ jupyter-lsp==1.5.1
357
+ jupyter-server-mathjax==0.2.6
358
+ jupyter-ydoc==0.2.5
359
+ jupyter_client==7.4.9
360
+ jupyter_client==8.6.0
361
+ jupyter_core==5.7.1
362
+ jupyter_server==2.12.5
363
+ jupyter_server_fileid==0.9.1
364
+ jupyter_server_proxy==4.1.0
365
+ jupyter_server_terminals==0.5.1
366
+ jupyter_server_ydoc==0.8.0
367
+ jupyterlab-lsp==5.1.0
368
+ jupyterlab-widgets==3.0.9
369
+ jupyterlab==4.1.6
370
+ jupyterlab_git==0.44.0
371
+ jupyterlab_pygments==0.3.0
372
+ jupyterlab_server==2.25.2
373
+ jupytext==1.16.0
374
+ kaggle-environments==1.14.3
375
+ kaggle==1.6.12
376
+ kagglehub==0.2.3
377
+ keras-cv==0.8.2
378
+ keras-nlp==0.9.3
379
+ keras-tuner==1.4.6
380
+ keras==3.2.1
381
+ kernels-mixer==0.0.7
382
+ keyring==24.3.0
383
+ keyrings.google-artifactregistry-auth==1.1.2
384
+ kfp-pipeline-spec==0.2.2
385
+ kfp-server-api==2.0.5
386
+ kfp==2.5.0
387
+ kiwisolver==1.4.5
388
+ kmapper==2.0.1
389
+ kmodes==0.12.2
390
+ korean-lunar-calendar==0.3.1
391
+ kornia==0.7.2
392
+ kornia_rs==0.1.3
393
+ kt-legacy==1.0.5
394
+ kubernetes==26.1.0
395
+ langcodes==3.3.0
396
+ langid==1.1.6
397
+ lazy_loader==0.3
398
+ learntools==0.3.4
399
+ leven==1.0.4
400
+ libclang==16.0.6
401
+ libmambapy==1.5.0
402
+ libpysal==4.9.2
403
+ librosa==0.10.1
404
+ lightgbm==4.2.0
405
+ lightning-utilities==0.11.2
406
+ lime==0.2.0.1
407
+ line-profiler==4.1.2
408
+ linkify-it-py==2.0.3
409
+ llvmlite==0.41.1
410
+ llvmlite==0.42.0
411
+ lml==0.1.0
412
+ locket==1.0.0
413
+ loguru==0.7.2
414
+ lxml==5.2.1
415
+ lz4==4.3.3
416
+ mamba==1.5.0
417
+ mapclassify==2.6.1
418
+ markdown-it-py==3.0.0
419
+ marshmallow==3.21.1
420
+ matplotlib-inline==0.1.6
421
+ matplotlib-venn==0.11.10
422
+ matplotlib==3.7.5
423
+ matplotlib==3.8.4
424
+ mccabe==0.7.0
425
+ mdit-py-plugins==0.4.0
426
+ mdurl==0.1.2
427
+ memory-profiler==0.61.0
428
+ menuinst==2.0.1
429
+ mercantile==1.2.1
430
+ mgwr==2.2.1
431
+ missingno==0.5.2
432
+ mistune==0.8.4
433
+ mizani==0.11.1
434
+ ml-dtypes==0.2.0
435
+ mlcrate==0.2.0
436
+ mlens==0.2.3
437
+ mlxtend==0.23.1
438
+ mne==1.6.1
439
+ mnist==0.2.2
440
+ momepy==0.7.0
441
+ more-itertools==10.2.0
442
+ mpld3==0.5.10
443
+ mpmath==1.3.0
444
+ msgpack==1.0.7
445
+ multidict==6.0.4
446
+ multimethod==1.10
447
+ multipledispatch==1.0.0
448
+ multiprocess==0.70.16
449
+ munkres==1.1.4
450
+ murmurhash==1.0.10
451
+ mypy-extensions==1.0.0
452
+ namex==0.0.8
453
+ nb-conda-kernels==2.3.1
454
+ nb_conda==2.2.1
455
+ nbclassic==1.0.0
456
+ nbclient==0.5.13
457
+ nbconvert==6.4.5
458
+ nbdime==3.2.0
459
+ nbformat==5.9.2
460
+ ndindex==1.8
461
+ nest-asyncio==1.5.8
462
+ networkx==3.2.1
463
+ nibabel==5.2.1
464
+ nilearn==0.10.4
465
+ ninja==1.11.1.1
466
+ nltk==3.2.4
467
+ nose==1.3.7
468
+ notebook==6.5.4
469
+ notebook==6.5.6
470
+ notebook_executor==0.2
471
+ notebook_shim==0.2.3
472
+ numba==0.58.1
473
+ numba==0.59.1
474
+ numexpr==2.10.0
475
+ numpy==1.26.4
476
+ nvidia-ml-py==11.495.46
477
+ nvtx==0.2.10
478
+ oauth2client==4.1.3
479
+ oauthlib==3.2.2
480
+ objsize==0.6.1
481
+ odfpy==1.4.1
482
+ olefile==0.47
483
+ onnx==1.16.0
484
+ opencensus-context==0.1.3
485
+ opencensus==0.11.4
486
+ opencv-contrib-python==4.9.0.80
487
+ opencv-python-headless==4.9.0.80
488
+ opencv-python==4.9.0.80
489
+ openpyxl==3.1.2
490
+ openslide-python==1.3.1
491
+ opentelemetry-api==1.22.0
492
+ opentelemetry-exporter-otlp-proto-common==1.22.0
493
+ opentelemetry-exporter-otlp-proto-grpc==1.22.0
494
+ opentelemetry-exporter-otlp-proto-http==1.22.0
495
+ opentelemetry-exporter-otlp==1.22.0
496
+ opentelemetry-proto==1.22.0
497
+ opentelemetry-sdk==1.22.0
498
+ opentelemetry-semantic-conventions==0.43b0
499
+ opt-einsum==3.3.0
500
+ optax==0.2.2
501
+ optree==0.11.0
502
+ optuna==3.6.1
503
+ orbax-checkpoint==0.5.9
504
+ ordered-set==4.1.0
505
+ orjson==3.9.10
506
+ ortools==9.4.1874
507
+ osmnx==1.9.2
508
+ overrides==7.4.0
509
+ packaging==21.3
510
+ pandas-datareader==0.10.0
511
+ pandas-profiling==3.6.6
512
+ pandas-summary==0.2.0
513
+ pandas==2.1.4
514
+ pandas==2.2.2
515
+ pandasql==0.7.3
516
+ pandocfilters==1.5.0
517
+ panel==1.4.1
518
+ papermill==2.5.0
519
+ param==2.1.0
520
+ parso==0.8.3
521
+ partd==1.4.1
522
+ path.py==12.5.0
523
+ path==16.14.0
524
+ pathos==0.3.2
525
+ pathy==0.10.3
526
+ patsy==0.5.6
527
+ pdf2image==1.17.0
528
+ peft==0.7.1
529
+ pettingzoo==1.24.0
530
+ pexpect==4.8.0
531
+ pexpect==4.9.0
532
+ phik==0.12.4
533
+ pickleshare==0.7.5
534
+ pillow==10.3.0
535
+ pip==23.3.2
536
+ pkgutil_resolve_name==1.3.10
537
+ platformdirs==4.2.0
538
+ plotly-express==0.4.1
539
+ plotly==5.18.0
540
+ plotnine==0.13.4
541
+ pluggy==1.4.0
542
+ pointpats==2.4.0
543
+ polars==0.20.21
544
+ polyglot==16.7.4
545
+ pooch==1.8.1
546
+ pox==0.3.4
547
+ ppca==0.0.4
548
+ ppft==1.7.6.8
549
+ preprocessing==0.1.13
550
+ preshed==3.0.9
551
+ prettytable==3.9.0
552
+ progressbar2==4.4.2
553
+ prometheus-client==0.19.0
554
+ promise==2.3
555
+ prompt-toolkit==3.0.42
556
+ prompt-toolkit==3.0.43
557
+ prophet==1.1.1
558
+ proto-plus==1.23.0
559
+ protobuf==3.20.3
560
+ protobuf==4.21.12
561
+ psutil==5.9.3
562
+ psutil==5.9.7
563
+ ptyprocess==0.7.0
564
+ pudb==2024.1
565
+ pure-eval==0.2.2
566
+ py-cpuinfo==9.0.0
567
+ py-spy==0.3.14
568
+ py4j==0.10.9.7
569
+ pyLDAvis==3.4.1
570
+ pyOpenSSL==23.3.0
571
+ pyaml==23.12.0
572
+ pyarrow-hotfix==0.6
573
+ pyarrow==15.0.2
574
+ pyasn1-modules==0.3.0
575
+ pyasn1==0.5.1
576
+ pybind11==2.12.0
577
+ pyclipper==1.3.0.post5
578
+ pycodestyle==2.11.1
579
+ pycosat==0.6.6
580
+ pycparser==2.21
581
+ pycryptodome==3.20.0
582
+ pyct==0.5.0
583
+ pycuda==2024.1
584
+ pydantic==2.5.3
585
+ pydantic==2.7.0
586
+ pydantic_core==2.14.6
587
+ pydantic_core==2.18.1
588
+ pydegensac==0.1.2
589
+ pydicom==2.4.4
590
+ pydocstyle==6.3.0
591
+ pydot==1.4.2
592
+ pydub==0.25.1
593
+ pyemd==1.0.0
594
+ pyerfa==2.0.1.4
595
+ pyexcel-io==0.6.6
596
+ pyexcel-ods==0.6.0
597
+ pyflakes==3.2.0
598
+ pygltflib==1.16.2
599
+ pykalman==0.9.7
600
+ pylibraft==23.8.0
601
+ pylint==3.1.0
602
+ pymc3==3.11.4
603
+ pymongo==3.13.0
604
+ pynndescent==0.5.12
605
+ pynvml==11.4.1
606
+ pynvrtc==9.2
607
+ pyparsing==3.1.1
608
+ pyparsing==3.1.2
609
+ pypdf==4.2.0
610
+ pyproj==3.6.1
611
+ pysal==24.1
612
+ pyshp==2.3.1
613
+ pytesseract==0.3.10
614
+ pytest==8.1.1
615
+ python-bidi==0.4.2
616
+ python-dateutil==2.9.0.post0
617
+ python-dotenv==1.0.0
618
+ python-json-logger==2.0.7
619
+ python-louvain==0.16
620
+ python-lsp-jsonrpc==1.1.2
621
+ python-lsp-server==1.11.0
622
+ python-slugify==8.0.4
623
+ python-utils==3.8.2
624
+ pythreejs==2.4.2
625
+ pytoolconfig==1.3.1
626
+ pytools==2024.1.1
627
+ pytorch-ignite==0.5.0.post2
628
+ pytorch-lightning==2.2.2
629
+ pytz==2023.3.post1
630
+ pytz==2024.1
631
+ pyu2f==0.1.5
632
+ pyviz_comms==3.0.2
633
+ pyzmq==24.0.1
634
+ pyzmq==25.1.2
635
+ qgrid==1.3.1
636
+ qtconsole==5.5.1
637
+ quantecon==0.7.2
638
+ qudida==0.0.4
639
+ raft-dask==23.8.0
640
+ rasterio==1.3.10
641
+ rasterstats==0.19.0
642
+ ray-cpp==2.9.0
643
+ ray==2.9.0
644
+ referencing==0.32.1
645
+ regex==2023.12.25
646
+ requests-oauthlib==1.3.1
647
+ requests-toolbelt==0.10.1
648
+ requests==2.31.0
649
+ retrying==1.3.3
650
+ retrying==1.3.4
651
+ rfc3339-validator==0.1.4
652
+ rfc3986-validator==0.1.1
653
+ rgf-python==3.12.0
654
+ rich-click==1.7.4
655
+ rich==13.7.0
656
+ rich==13.7.1
657
+ rmm==23.8.0
658
+ rope==1.13.0
659
+ rpds-py==0.16.2
660
+ rsa==4.9
661
+ ruamel-yaml-conda==0.15.100
662
+ ruamel.yaml.clib==0.2.7
663
+ ruamel.yaml==0.17.40
664
+ s2sphere==0.2.5
665
+ s3fs==2024.2.0
666
+ s3transfer==0.6.2
667
+ safetensors==0.4.3
668
+ scattertext==0.1.19
669
+ scikit-image==0.22.0
670
+ scikit-learn-intelex==2024.3.0
671
+ scikit-learn==1.2.2
672
+ scikit-multilearn==0.2.0
673
+ scikit-optimize==0.10.1
674
+ scikit-plot==0.3.7
675
+ scikit-surprise==1.1.3
676
+ scipy==1.11.4
677
+ scipy==1.13.0
678
+ seaborn==0.12.2
679
+ segment_anything==1.0
680
+ segregation==2.5
681
+ semver==3.0.2
682
+ sentencepiece==0.2.0
683
+ sentry-sdk==1.45.0
684
+ setproctitle==1.3.3
685
+ setuptools-git==1.2
686
+ setuptools-scm==8.0.4
687
+ setuptools==69.0.3
688
+ shap==0.44.1
689
+ shapely==2.0.4
690
+ shellingham==1.5.4
691
+ simpervisor==1.0.0
692
+ simplejson==3.19.2
693
+ six==1.16.0
694
+ sklearn-pandas==2.2.0
695
+ slicer==0.0.7
696
+ smart-open==6.4.0
697
+ smmap==5.0.1
698
+ sniffio==1.3.0
699
+ snowballstemmer==2.2.0
700
+ snuggs==1.4.7
701
+ sortedcontainers==2.4.0
702
+ soundfile==0.12.1
703
+ soupsieve==2.5
704
+ soxr==0.3.7
705
+ spacy-legacy==3.0.12
706
+ spacy-loggers==1.0.5
707
+ spacy==3.7.3
708
+ spaghetti==1.7.5.post1
709
+ spectral==0.23.1
710
+ spglm==1.1.0
711
+ sphinx-rtd-theme==0.2.4
712
+ spint==1.0.7
713
+ splot==1.1.5.post1
714
+ spopt==0.6.0
715
+ spreg==1.4.2
716
+ spvcm==0.3.0
717
+ sqlparse==0.4.4
718
+ squarify==0.4.3
719
+ srsly==2.4.8
720
+ stable-baselines3==2.1.0
721
+ stack-data==0.6.2
722
+ stack-data==0.6.3
723
+ stanio==0.5.0
724
+ starlette==0.32.0.post1
725
+ statsmodels==0.14.1
726
+ stemming==1.0.1
727
+ stop-words==2018.7.23
728
+ stopit==1.1.2
729
+ stumpy==1.12.0
730
+ sympy==1.12
731
+ tables==3.9.2
732
+ tabulate==0.9.0
733
+ tangled-up-in-unicode==0.2.0
734
+ tbb==2021.12.0
735
+ tblib==3.0.0
736
+ tenacity==8.2.3
737
+ tensorboard-data-server==0.7.2
738
+ tensorboard-plugin-profile==2.15.0
739
+ tensorboard==2.15.1
740
+ tensorboardX==2.6.2.2
741
+ tensorflow-cloud==0.1.16
742
+ tensorflow-datasets==4.9.4
743
+ tensorflow-decision-forests==1.8.1
744
+ tensorflow-estimator==2.15.0
745
+ tensorflow-hub==0.16.1
746
+ tensorflow-io-gcs-filesystem==0.35.0
747
+ tensorflow-io==0.35.0
748
+ tensorflow-metadata==0.14.0
749
+ tensorflow-probability==0.23.0
750
+ tensorflow-serving-api==2.14.1
751
+ tensorflow-text==2.15.0
752
+ tensorflow-transform==0.14.0
753
+ tensorflow==2.15.0
754
+ tensorstore==0.1.56
755
+ termcolor==2.4.0
756
+ terminado==0.18.0
757
+ testpath==0.6.0
758
+ text-unidecode==1.3
759
+ textblob==0.18.0.post0
760
+ texttable==1.7.0
761
+ tf_keras==2.15.1
762
+ tfp-nightly==0.24.0.dev0
763
+ thinc==8.2.2
764
+ threadpoolctl==3.2.0
765
+ tifffile==2023.12.9
766
+ timm==0.9.16
767
+ tinycss2==1.2.1
768
+ tobler==0.11.2
769
+ tokenizers==0.15.2
770
+ toml==0.10.2
771
+ tomli==2.0.1
772
+ tomlkit==0.12.4
773
+ toolz==0.12.1
774
+ torch==2.1.2
775
+ torchaudio==2.1.2
776
+ torchdata==0.7.1
777
+ torchinfo==1.8.0
778
+ torchmetrics==1.3.2
779
+ torchtext==0.16.2
780
+ torchvision==0.16.2
781
+ tornado==6.3.3
782
+ tqdm==4.66.1
783
+ traceml==1.0.8
784
+ traitlets==5.9.0
785
+ traittypes==0.2.1
786
+ transformers==4.36.2
787
+ treelite-runtime==3.2.0
788
+ treelite==3.2.0
789
+ truststore==0.8.0
790
+ trx-python==0.2.9
791
+ tsfresh==0.20.2
792
+ typeguard==4.1.5
793
+ typer==0.9.0
794
+ typer==0.9.4
795
+ types-python-dateutil==2.8.19.20240106
796
+ typing-inspect==0.9.0
797
+ typing-utils==0.1.0
798
+ typing_extensions==4.9.0
799
+ tzdata==2023.4
800
+ uc-micro-py==1.0.3
801
+ ucx-py==0.33.0
802
+ ujson==5.9.0
803
+ umap-learn==0.5.6
804
+ unicodedata2==15.1.0
805
+ update-checker==0.18.0
806
+ uri-template==1.3.0
807
+ uritemplate==3.0.1
808
+ urllib3==1.26.18
809
+ urllib3==2.1.0
810
+ urwid==2.6.10
811
+ urwid_readline==0.14
812
+ uvicorn==0.25.0
813
+ uvloop==0.19.0
814
+ vaex-astro==0.9.3
815
+ vaex-core==4.17.1
816
+ vaex-hdf5==0.14.1
817
+ vaex-jupyter==0.8.2
818
+ vaex-ml==0.18.3
819
+ vaex-server==0.9.0
820
+ vaex-viz==0.5.4
821
+ vaex==4.17.0
822
+ vec_noise==1.1.4
823
+ vecstack==0.4.0
824
+ virtualenv==20.21.0
825
+ visions==0.7.5
826
+ vowpalwabbit==9.9.0
827
+ vtk==9.3.0
828
+ wandb==0.16.6
829
+ wasabi==1.1.2
830
+ watchfiles==0.21.0
831
+ wavio==0.0.8
832
+ wcwidth==0.2.13
833
+ weasel==0.3.4
834
+ webcolors==1.13
835
+ webencodings==0.5.1
836
+ websocket-client==1.7.0
837
+ websockets==12.0
838
+ wfdb==4.1.2
839
+ whatthepatch==1.0.5
840
+ wheel==0.42.0
841
+ widgetsnbextension==3.6.6
842
+ witwidget==1.8.1
843
+ woodwork==0.30.0
844
+ wordcloud==1.9.3
845
+ wordsegment==1.3.1
846
+ wrapt==1.14.1
847
+ xarray-einstats==0.7.0
848
+ xarray==2024.3.0
849
+ xgboost==2.0.3
850
+ xvfbwrapper==0.2.9
851
+ xxhash==3.4.1
852
+ xyzservices==2024.4.0
853
+ y-py==0.6.2
854
+ yapf==0.40.2
855
+ yarl==1.9.3
856
+ yarl==1.9.4
857
+ ydata-profiling==4.6.4
858
+ yellowbrick==1.5
859
+ ypy-websocket==0.8.4
860
+ zict==3.0.0
861
+ zipp==3.17.0
862
+ zstandard==0.22.0
rng_state.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fc513ee60350b588563245f8e235409a1c540bf4e2255635f10584f28a56acaf
3
+ size 14244
scheduler.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:77718530cc644ac9df1c886bd6366d3262d99a9cbd3ebc77eed933224f4b03df
3
+ size 1064
trainer_state.json ADDED
@@ -0,0 +1,77 @@
1
+ {
2
+ "best_metric": 0.08091000467538834,
3
+ "best_model_checkpoint": "/kaggle/working/trainer/checkpoint-472",
4
+ "epoch": 4.0,
5
+ "eval_steps": 500,
6
+ "global_step": 472,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 1.0,
13
+ "learning_rate": 4.388888888888889e-05,
14
+ "loss": 0.7936,
15
+ "step": 118
16
+ },
17
+ {
18
+ "epoch": 1.0,
19
+ "eval_loss": 0.1481190025806427,
20
+ "eval_runtime": 37.3733,
21
+ "eval_samples_per_second": 0.696,
22
+ "eval_steps_per_second": 0.348,
23
+ "step": 118
24
+ },
25
+ {
26
+ "epoch": 2.0,
27
+ "learning_rate": 3.2962962962962964e-05,
28
+ "loss": 0.1048,
29
+ "step": 236
30
+ },
31
+ {
32
+ "epoch": 2.0,
33
+ "eval_loss": 0.1020423024892807,
34
+ "eval_runtime": 37.3813,
35
+ "eval_samples_per_second": 0.696,
36
+ "eval_steps_per_second": 0.348,
37
+ "step": 236
38
+ },
39
+ {
40
+ "epoch": 3.0,
41
+ "learning_rate": 2.2037037037037038e-05,
42
+ "loss": 0.0731,
43
+ "step": 354
44
+ },
45
+ {
46
+ "epoch": 3.0,
47
+ "eval_loss": 0.08479975163936615,
48
+ "eval_runtime": 37.4565,
49
+ "eval_samples_per_second": 0.694,
50
+ "eval_steps_per_second": 0.347,
51
+ "step": 354
52
+ },
53
+ {
54
+ "epoch": 4.0,
55
+ "learning_rate": 1.1111111111111112e-05,
56
+ "loss": 0.0518,
57
+ "step": 472
58
+ },
59
+ {
60
+ "epoch": 4.0,
61
+ "eval_loss": 0.08091000467538834,
62
+ "eval_runtime": 37.3984,
63
+ "eval_samples_per_second": 0.695,
64
+ "eval_steps_per_second": 0.348,
65
+ "step": 472
66
+ }
67
+ ],
68
+ "logging_steps": 500,
69
+ "max_steps": 590,
70
+ "num_input_tokens_seen": 0,
71
+ "num_train_epochs": 5,
72
+ "save_steps": 500,
73
+ "total_flos": 1.748999270993756e+17,
74
+ "train_batch_size": 2,
75
+ "trial_name": null,
76
+ "trial_params": null
77
+ }
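
A small sketch for inspecting the metrics logged above: given a local copy of this trainer_state.json, the per-epoch eval loss in `log_history` can be printed as follows.

```python
# Sketch: read trainer_state.json (local copy assumed) and print eval loss per epoch.
import json

with open("trainer_state.json") as f:
    state = json.load(f)

for entry in state["log_history"]:
    if "eval_loss" in entry:
        print(f"epoch {entry['epoch']:.0f}: eval_loss = {entry['eval_loss']:.4f}")
```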
training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:05cd2b18a00bd366c2eb1651ba82f068e5dd0988fec2f9720ca54c19c676c666
3
+ size 4728
wandb-metadata.json ADDED
@@ -0,0 +1,66 @@
1
+ {
2
+ "os": "Linux-5.15.133+-x86_64-with-glibc2.31",
3
+ "python": "3.10.13",
4
+ "heartbeatAt": "2024-04-25T07:06:00.689254",
5
+ "startedAt": "2024-04-25T07:05:59.918645",
6
+ "docker": null,
7
+ "cuda": null,
8
+ "args": [],
9
+ "state": "running",
10
+ "program": "kaggle.ipynb",
11
+ "codePathLocal": null,
12
+ "root": "/kaggle/working",
13
+ "host": "2b6c16279610",
14
+ "username": "root",
15
+ "executable": "/opt/conda/bin/python3.10",
16
+ "cpu_count": 2,
17
+ "cpu_count_logical": 4,
18
+ "cpu_freq": {
19
+ "current": 2000.186,
20
+ "min": 0.0,
21
+ "max": 0.0
22
+ },
23
+ "cpu_freq_per_core": [
24
+ {
25
+ "current": 2000.186,
26
+ "min": 0.0,
27
+ "max": 0.0
28
+ },
29
+ {
30
+ "current": 2000.186,
31
+ "min": 0.0,
32
+ "max": 0.0
33
+ },
34
+ {
35
+ "current": 2000.186,
36
+ "min": 0.0,
37
+ "max": 0.0
38
+ },
39
+ {
40
+ "current": 2000.186,
41
+ "min": 0.0,
42
+ "max": 0.0
43
+ }
44
+ ],
45
+ "disk": {
46
+ "/": {
47
+ "total": 8062.387607574463,
48
+ "used": 5612.996185302734
49
+ }
50
+ },
51
+ "gpu": "Tesla T4",
52
+ "gpu_count": 2,
53
+ "gpu_devices": [
54
+ {
55
+ "name": "Tesla T4",
56
+ "memory_total": 16106127360
57
+ },
58
+ {
59
+ "name": "Tesla T4",
60
+ "memory_total": 16106127360
61
+ }
62
+ ],
63
+ "memory": {
64
+ "total": 31.357559204101562
65
+ }
66
+ }
wandb-summary.json ADDED
@@ -0,0 +1 @@
1
+ {"train/loss": 0.0518, "train/learning_rate": 1.1111111111111112e-05, "train/epoch": 4.0, "train/global_step": 472, "_timestamp": 1714044453.3153298, "_runtime": 15693.386371850967, "_step": 7, "eval/loss": 0.08091000467538834, "eval/runtime": 37.3984, "eval/samples_per_second": 0.695, "eval/steps_per_second": 0.348}