  0%|          | 0/3000 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/sagrilaft/Project/audio/xls-r-et/src/run_speech_recognition_ctc_bnb.py", line 760, in <module>
    main()
  File "/home/sagrilaft/Project/audio/xls-r-et/src/run_speech_recognition_ctc_bnb.py", line 711, in main
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
  File "/home/sagrilaft/Project/audio/xls-r-et/.venv/lib/python3.9/site-packages/transformers/trainer.py", line 1365, in train
    tr_loss_step = self.training_step(model, inputs)
  File "/home/sagrilaft/Project/audio/xls-r-et/.venv/lib/python3.9/site-packages/transformers/trainer.py", line 1940, in training_step
    loss = self.compute_loss(model, inputs)
  File "/home/sagrilaft/Project/audio/xls-r-et/.venv/lib/python3.9/site-packages/transformers/trainer.py", line 1972, in compute_loss
    outputs = model(**inputs)
  File "/home/sagrilaft/Project/audio/xls-r-et/.venv/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/sagrilaft/Project/audio/xls-r-et/.venv/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1720, in forward
    outputs = self.wav2vec2(
  File "/home/sagrilaft/Project/audio/xls-r-et/.venv/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/sagrilaft/Project/audio/xls-r-et/.venv/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1313, in forward
    extract_features = self.feature_extractor(input_values)
  File "/home/sagrilaft/Project/audio/xls-r-et/.venv/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/sagrilaft/Project/audio/xls-r-et/.venv/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 482, in forward
    hidden_states = conv_layer(hidden_states)
  File "/home/sagrilaft/Project/audio/xls-r-et/.venv/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/sagrilaft/Project/audio/xls-r-et/.venv/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 357, in forward
    hidden_states = self.layer_norm(hidden_states)
  File "/home/sagrilaft/Project/audio/xls-r-et/.venv/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/sagrilaft/Project/audio/xls-r-et/.venv/lib64/python3.9/site-packages/torch/nn/modules/normalization.py", line 189, in forward
    return F.layer_norm(
  File "/home/sagrilaft/Project/audio/xls-r-et/.venv/lib64/python3.9/site-packages/torch/nn/functional.py", line 2347, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: CUDA out of memory. Tried to allocate 4.18 GiB (GPU 0; 31.75 GiB total capacity; 3.35 GiB already allocated; 1.49 GiB free; 3.35 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
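
The allocator's own hint at the end of the trace points at two standard mitigations: cap block splitting via PYTORCH_CUDA_ALLOC_CONF to reduce fragmentation, and shrink the per-step memory footprint of the run. A minimal sketch of both, assuming the standard transformers TrainingArguments interface; the concrete values here are illustrative, not tuned for this run:

import os

# Must be set before the first CUDA allocation; caps the size of
# allocator blocks that may be split, which reduces fragmentation.
# The 128 MiB value is an illustrative starting point, not a tuned one.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

from transformers import TrainingArguments

# Trade throughput for memory: a smaller per-device batch combined with
# gradient accumulation keeps the effective batch size unchanged, and
# gradient checkpointing recomputes activations during the backward pass
# instead of storing them all.
training_args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=4,   # e.g. halved from 8
    gradient_accumulation_steps=2,   # restores an effective batch of 8
    gradient_checkpointing=True,
    fp16=True,                       # halves activation memory on this GPU
)

Since PYTORCH_CUDA_ALLOC_CONF is read once at the first CUDA allocation, exporting it in the shell before launching run_speech_recognition_ctc_bnb.py works just as well as setting it in code.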