diff --git "a/stderr-2e-05.slurm" "b/stderr-2e-05.slurm"
new file mode 100644
--- /dev/null
+++ "b/stderr-2e-05.slurm"
@@ -0,0 +1,45753 @@
+WARNING:root:Dropping 0 rows
+Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
+/baie/nfs-cluster-1/data1/raid1/homedirs/eliot.maes/multimodal-itmodels/experiments/./ES_corlec is already a clone of https://huggingface.co/maesneako/ES_corlec. Make sure you pull the latest changes with `repo.git_pull()`.
+WARNING:huggingface_hub.repository:/baie/nfs-cluster-1/data1/raid1/homedirs/eliot.maes/multimodal-itmodels/experiments/./ES_corlec is already a clone of https://huggingface.co/maesneako/ES_corlec. Make sure you pull the latest changes with `repo.git_pull()`.
+The following columns in the training set don't have a corresponding argument in `GPT2LMHeadModel.forward` and have been ignored: index, text_input_ids, text_input_ids_full, start_idx, text_u, speaker, text, length, text_u_full, __index_level_0__, file. If index, text_input_ids, text_input_ids_full, start_idx, text_u, speaker, text, length, text_u_full, __index_level_0__, file are not expected by `GPT2LMHeadModel.forward`, you can safely ignore this message.
+/baie/nfs-cluster-1/data1/raid1/homedirs/eliot.maes/env/lib/python3.6/site-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
+  FutureWarning,
+***** Running training *****
+  Num examples = 80691
+  Num Epochs = 10
+  Instantaneous batch size per device = 16
+  Total train batch size (w. parallel, distributed & accumulation) = 16
+  Gradient Accumulation steps = 1
+  Total optimization steps = 50440
+  0%|          | 0/50440 [00:00
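
Note on the FutureWarning above: transformers' bundled AdamW is deprecated there, and the warning itself suggests switching to torch.optim.AdamW. A minimal sketch of one way to do that with a standard Trainer setup follows; the checkpoint name and output directory are placeholders, not taken from this run, and only the learning rate (2e-05, per the filename), epoch count, and batch size come from the log.

import torch
from transformers import GPT2LMHeadModel, Trainer, TrainingArguments

model = GPT2LMHeadModel.from_pretrained("gpt2")  # placeholder checkpoint, not this run's model
args = TrainingArguments(
    output_dir="out",                # placeholder
    learning_rate=2e-05,             # from the job name stderr-2e-05.slurm
    num_train_epochs=10,             # "Num Epochs" in the log
    per_device_train_batch_size=16,  # "Instantaneous batch size per device"
)

# Build PyTorch's AdamW explicitly instead of letting Trainer create the
# deprecated transformers.AdamW.
optimizer = torch.optim.AdamW(model.parameters(), lr=args.learning_rate)

# Passing (optimizer, None) makes Trainer use this optimizer and create its
# default learning-rate scheduler; train_dataset is omitted in this sketch.
trainer = Trainer(model=model, args=args, optimizers=(optimizer, None))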
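
The reported schedule is internally consistent: with a total train batch size of 16 and no gradient accumulation, each epoch takes ceil(80691 / 16) = 5044 optimization steps, and 5044 steps x 10 epochs = 50440, matching "Total optimization steps = 50440". The same arithmetic in Python, using only numbers from the log:

import math

num_examples = 80691  # "Num examples" from the log
batch_size = 16       # total train batch size (parallelism and accumulation both 1)
num_epochs = 10       # "Num Epochs" from the log

steps_per_epoch = math.ceil(num_examples / batch_size)  # 5044
total_steps = steps_per_epoch * num_epochs              # 50440
print(steps_per_epoch, total_steps)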