Loading pytorch-gpu/py3/2.1.1
  Loading requirement: cuda/11.8.0 nccl/2.18.5-1-cuda cudnn/8.7.0.84-cuda gcc/8.5.0 openmpi/4.1.5-cuda intel-mkl/2020.4 magma/2.7.1-cuda sox/14.4.2 sparsehash/2.0.3 libjpeg-turbo/2.1.3 ffmpeg/4.4.4
+ HF_DATASETS_OFFLINE=1
+ TRANSFORMERS_OFFLINE=1
+ python3 OnlyGeneralTokenizer.py
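
The two exported variables switch the Hugging Face stack into offline mode, so the job reads datasets and tokenizers from the local cache instead of the network. Equivalently (assuming the cache is already populated), they can be set inside the script, provided this happens before any Hugging Face import:

    import os

    # Must be set before datasets/transformers are imported, otherwise the
    # libraries have already read the environment.
    os.environ["HF_DATASETS_OFFLINE"] = "1"
    os.environ["TRANSFORMERS_OFFLINE"] = "1"

    from datasets import load_dataset        # now resolves from the local cache only
    from transformers import BertTokenizer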

Checking label assignment:

Domain: Mathematics
Categories: math.DS math.CA
Abstract: we prove an inequality for holder continuous differential forms on compact manifolds in which the in...

Domain: Computer Science
Categories: cs.NE
Abstract: when looking for a solution deterministic methods have the enormous advantage that they do find glob...

Domain: Physics
Categories: physics.hist-ph quant-ph
Abstract: maxwells demon was born in and still thrives in modern physics he plays important roles in clarifyin...

Domain: Chemistry
Categories: nlin.PS
Abstract: the modulational instability of two interacting waves in a nonlocal kerrtype medium is considered an...

Domain: Statistics
Categories: astro-ph stat.ME
Abstract: the identification of increasingly smaller signal from objects observed with a nonperfect instrument...

Domain: Biology
Categories: q-bio.MN cond-mat.stat-mech
Abstract: we find that discrete noise of inhibiting signal molecules can greatly delay the extinction of plasm...
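
Each sanity-check entry prints the assigned domain, the raw arXiv categories, and the first characters of the lowercased, digit-stripped abstract. A minimal sketch of a prefix-based assignment of the kind presumably used; the table below is an assumption, and the nlin.PS sample landing under Chemistry shows the script's real mapping covers more than the obvious prefixes:

    # Hypothetical category-prefix -> domain table; the script's actual
    # mapping is not shown in this log.
    DOMAIN_BY_PREFIX = {
        "math": "Mathematics",
        "cs": "Computer Science",
        "physics": "Physics",
        "quant-ph": "Physics",
        "q-bio": "Biology",
        "stat": "Statistics",
    }

    def assign_domain(categories: str) -> str:
        # Take the prefix of the first listed category: "math.DS" -> "math".
        prefix = categories.split()[0].split(".")[0]
        return DOMAIN_BY_PREFIX.get(prefix, "Unknown")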
/linkhome/rech/genrug01/uft12cr/.local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:2057: FutureWarning: Calling BertTokenizer.from_pretrained() with the path to a single file or url is deprecated and won't be possible anymore in v5. Use a model identifier or the path to a directory instead.
warnings.warn(
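
The FutureWarning means the script hands BertTokenizer.from_pretrained() the path of a single vocabulary file. The supported form is a directory (or hub identifier) containing the tokenizer files; the paths below are placeholders:

    from transformers import BertTokenizer

    # Deprecated: loading from one file
    # tokenizer = BertTokenizer.from_pretrained("/path/to/vocab.txt")

    # Supported: load from a directory that contains vocab.txt
    tokenizer = BertTokenizer.from_pretrained("/path/to/tokenizer_dir")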

Training with General tokenizer:
Vocabulary size: 30522
Could not load pretrained weights from /linkhome/rech/genrug01/uft12cr/bert_Model. Starting with random weights. Error: It looks like the config file at
Initialized model with vocabulary size: 30522
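
The two messages above suggest a load-or-fall-back pattern around the checkpoint directory: try the pretrained weights, and on failure build a randomly initialized model from a config. A sketch of that pattern with six labels for the six domains (the exact exception handling in the script is an assumption):

    from transformers import BertConfig, BertForSequenceClassification

    model_path = "/linkhome/rech/genrug01/uft12cr/bert_Model"
    try:
        model = BertForSequenceClassification.from_pretrained(model_path, num_labels=6)
    except Exception as e:
        print(f"Could not load pretrained weights from {model_path}. "
              f"Starting with random weights. Error: {e}")
        model = BertForSequenceClassification(BertConfig(vocab_size=30522, num_labels=6))
    print(f"Initialized model with vocabulary size: {model.config.vocab_size}")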
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:172: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
scaler = amp.GradScaler()
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
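
The per-batch block is a shape and range sanity check. The invariant that matters is the last pair of numbers: the largest token id must stay strictly below the vocabulary size, or the embedding lookup raises an index error. A sketch of the logging that would produce it (variable names are assumptions):

    if batch_idx % 100 == 0:
        print(f"Batch {batch_idx}:")
        print(f"input_ids shape: {input_ids.shape}")
        print(f"attention_mask shape: {attention_mask.shape}")
        print(f"labels shape: {labels.shape}")
        print(f"input_ids max value: {input_ids.max().item()}")
        print(f"Vocab size: {tokenizer.vocab_size}")
    # Token ids index into an embedding table of vocab_size rows:
    assert input_ids.max().item() < tokenizer.vocab_size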
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
with amp.autocast():
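
Both FutureWarnings have the same fix: since PyTorch 2.4 the CUDA-specific entry points live in torch.amp and take an explicit device argument. A sketch of the migrated mixed-precision step; model, optimizer, and the batch tensors are placeholders:

    import torch

    scaler = torch.amp.GradScaler("cuda")      # was: torch.cuda.amp.GradScaler()

    optimizer.zero_grad()
    with torch.amp.autocast("cuda"):           # was: torch.cuda.amp.autocast()
        outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
        loss = outputs.loss
    scaler.scale(loss).backward()              # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)                     # unscales gradients, then steps
    scaler.update()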
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29513
Vocab size: 30522
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29413
Vocab size: 30522
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29237
Vocab size: 30522
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29586
Vocab size: 30522
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29221
Vocab size: 30522
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29514
Vocab size: 30522
Epoch 1/3:
Val Accuracy: 0.7306, Val F1: 0.6541
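
F1 trailing accuracy at every epoch is what macro-averaged F1 looks like on an imbalanced label set: each of the six domains counts equally, so weak minority classes pull the mean down. A sketch of the presumed computation (the averaging mode is an assumption):

    from sklearn.metrics import accuracy_score, f1_score

    val_acc = accuracy_score(y_true, y_pred)
    val_f1 = f1_score(y_true, y_pred, average="macro")  # equal weight per class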
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29602
Vocab size: 30522
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
with amp.autocast():
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29374
Vocab size: 30522
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29601
Vocab size: 30522
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29535
Vocab size: 30522
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29602
Vocab size: 30522
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29454
Vocab size: 30522
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29280
Vocab size: 30522
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29417
Vocab size: 30522
Epoch 2/3:
Val Accuracy: 0.7961, Val F1: 0.7582
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29299
Vocab size: 30522
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
with amp.autocast():
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29577
Vocab size: 30522
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29451
Vocab size: 30522
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29454
Vocab size: 30522
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29532
Vocab size: 30522
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29413
Vocab size: 30522
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29586
Vocab size: 30522
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29280
Vocab size: 30522
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29494
Vocab size: 30522
Epoch 3/3:
Val Accuracy: 0.8204, Val F1: 0.7894

Test Results for General tokenizer:
Accuracy: 0.8204
F1 Score: 0.7893
AUC-ROC: 0.8693
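
AUC-ROC on a six-class problem needs per-class probabilities rather than hard predictions; a sketch of the usual one-vs-rest computation (the multi_class and average choices are assumptions):

    import torch
    from sklearn.metrics import roc_auc_score

    probs = torch.softmax(logits, dim=-1).cpu().numpy()  # shape (n_samples, 6)
    auc = roc_auc_score(y_true, probs, multi_class="ovr", average="macro")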

Class distribution in training set:
Class Biology: 439 samples
Class Chemistry: 454 samples
Class Computer Science: 1358 samples
Class Mathematics: 9480 samples
Class Physics: 2733 samples
Class Statistics: 200 samples
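
Mathematics alone is roughly two thirds of the 14,664 training samples while Statistics has only 200, which is consistent with F1 lagging accuracy throughout. A common mitigation (not necessarily what this script does) is inverse-frequency class weighting in the loss:

    import torch

    # Counts from the distribution above, in the order Biology, Chemistry,
    # Computer Science, Mathematics, Physics, Statistics.
    counts = torch.tensor([439.0, 454.0, 1358.0, 9480.0, 2733.0, 200.0])
    weights = counts.sum() / (len(counts) * counts)  # rare classes get larger weights
    criterion = torch.nn.CrossEntropyLoss(weight=weights.to("cuda"))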
|