[2023-12-07 01:45:38,733][hydra][INFO] - 
estimator:
  accelerator: gpu
  precision: 32
  deterministic: true
  tf32_mode: high
callbacks:
  timer:
    _target_: energizer.active_learning.callbacks.Timer
  save_outputs:
    _target_: src.callbacks.SaveOutputs
    dirpath: ./logs/
    instance_level: false
    batch_level: false
    epoch_level: false
  early_stopping:
    _target_: energizer.callbacks.early_stopping.EarlyStopping
    monitor: train/avg_f1_minclass
    stage: train
    interval: epoch
    mode: max
    min_delta: 1.0e-05
    patience: 10
    stopping_threshold: null
    divergence_threshold: null
    verbose: true
  model_checkpoint:
    _target_: energizer.callbacks.model_checkpoint.ModelCheckpoint
    dirpath: .checkpoints
    monitor: train/avg_f1_minclass
    stage: train
    mode: max
    save_last: false
    save_top_k: 1
    verbose: true
loggers:
  tensorboard:
    _target_: energizer.loggers.TensorBoardLogger
    root_dir: ./
    name: tb_logs
    version: null
data:
  batch_size: 32
  eval_batch_size: 256
  num_workers: 32
  pin_memory: true
  drop_last: false
  persistent_workers: true
  shuffle: true
  seed: 123456
  replacement: false
  max_length: 512
active_data:
  budget: 100
  positive_budget: 5
  seed: 123456
fit:
  min_steps: 100
  max_epochs: 10
  learning_rate: 4.0e-05
  optimizer: adamw
  log_interval: 1
  enable_progress_bar: false
  limit_train_batches: null
  limit_validation_batches: null
active_fit:
  max_budget: 5000
  query_size: 25
  reinit_model: true
  limit_pool_batches: null
  limit_test_batches: null
test:
  log_interval: 1
  enable_progress_bar: false
  limit_batches: null
strategy:
  name: seals_entropy
  args:
    seed: 42
    subpool_size: 10000
    num_neighbours: 50
    max_search_size: null
model:
  name: deberta_v3-base
  seed: 654321
dataset:
  name: agnews-business-.01
  text_column: text
  label_column: labels
  uid_column: uid
  prepared_path: /rds/user/pl487/hpc-work/anchoral/data/prepared/agnews-business-.01
  processed_path: /rds/user/pl487/hpc-work/anchoral/data/processed/agnews
  minority_classes:
  - 1
index_metric: all-mpnet-base-v2_cosine
log_interval: 1
enable_progress_bar: false
limit_batches: null
seed: 42
experiment_group: deberta_v3-base/new
run_name: agnews-business-.01/deberta_v3-base_seals_entropy_2023-12-06T10-52-24
data_path: /rds/user/pl487/hpc-work/anchoral/data

======================================================================
[2023-12-07 01:45:38,755][hydra][INFO] - Running active learning with strategy {'name': 'seals_entropy', 'args': {'seed': 42, 'subpool_size': 10000, 'num_neighbours': 50, 'max_search_size': None}}
[2023-12-07 01:45:38,773][hydra][INFO] - Seed enabled: 42
[2023-12-07 01:45:39,852][hydra][INFO] - loading index from /rds/user/pl487/hpc-work/anchoral/data/processed/agnews/all-mpnet-base-v2_cosine
[2023-12-07 01:45:41,634][hydra][INFO] - Labelled size: 100 Pool size: 90810 Test size: 7600
Label distribution:
|    | labels   |   count |   perc |
|---:|:---------|--------:|-------:|
|  0 | Negative |      95 |   0.95 |
|  1 | Positive |       5 |   0.05 |
[2023-12-07 01:45:41,650][hydra][INFO] - Batch:
{<InputKeys.INPUT_IDS: 'input_ids'>: tensor([[    1,  1864,   294, 41142,  2729,  2152, 19979, 53884,  5147,  1050,
           279,  7439,  2298,   927,  7710, 70907,     2]]), <InputKeys.ATT_MASK: 'attention_mask'>: tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]), <InputKeys.LABELS: 'labels'>: tensor([0]), <InputKeys.ON_CPU: 'on_cpu'>: {<SpecialKeys.ID: 'uid'>: [122709]}}
[2023-12-07 01:45:43,524][hydra][INFO] - Loggers: {'tensorboard': <energizer.loggers.tensorboard.TensorBoardLogger object at 0x151f0e916400>}
[2023-12-07 01:45:43,524][hydra][INFO] - Callbacks: {'timer': <energizer.active_learning.callbacks.Timer object at 0x151efe6f8d60>, 'save_outputs': <src.callbacks.SaveOutputs object at 0x151efe6c8130>, 'early_stopping': <energizer.callbacks.early_stopping.EarlyStopping object at 0x151efe6c87c0>, 'model_checkpoint': <energizer.callbacks.model_checkpoint.ModelCheckpoint object at 0x151efe6c80d0>}
[2023-12-07 01:45:43,547][hydra][INFO] - 
  | Name       | Type           | Params
----------------------------------------------
0 | deberta    | DebertaV2Model | 183 M 
1 | pooler     | ContextPooler  | 590 K 
2 | classifier | Linear         | 1.5 K 
3 | dropout    | StableDropout  | 0     
----------------------------------------------
184 M     Trainable params
0         Non-trainable params
184 M     Total params
737.695   Total estimated model params size (MB)
0.00 GB   CUDA Memory used
[2023-12-07 07:42:45,947][submitit][INFO] - Job has timed out. Ran 357 minutes out of requested 360 minutes.
[2023-12-07 07:42:45,981][submitit][WARNING] - Caught signal SIGUSR2 on gpu-q-77: this job is timed-out.
[2023-12-07 07:42:45,989][submitit][INFO] - Calling checkpoint method.
[2023-12-07 07:42:46,014][submitit][INFO] - Job not requeued because: timed-out too many times.
[2023-12-07 07:42:46,014][submitit][WARNING] - Bypassing signal SIGCONT
[2023-12-07 07:42:46,018][submitit][INFO] - Job completed successfully