TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class.

#10
by tammm - opened

I wonder if the thread owner has ever run into this situation:
it has been running for several hours without producing anything, and it keeps printing "UserWarning: TypedStorage is deprecated".

INFO:
| Name | Type | Params

0 | net_g | SynthesizerTrn | 45.2 M
1 | net_d | MultiPeriodDiscriminator | 46.7 M

91.9 M Trainable params
0 Non-trainable params
91.9 M Total params
367.617 Total estimated model params size (MB)
Sanity Checking: 0it [00:00, ?it/s][16:58:57] /usr/local/lib/python3.9/dist-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.get(instance, owner)()
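The TypedStorage warning above is harmless: it fires whenever some library code touches `tensor.storage()`, and it only matters if you manipulate storages directly. A minimal sketch of the replacement API the warning points to, assuming PyTorch 2.x:

```python
import torch

t = torch.arange(4, dtype=torch.float32)

# Old (deprecated) call: t.storage() returns a TypedStorage and
# triggers the UserWarning seen in the log above.

# New call: UntypedStorage exposes the raw bytes backing the tensor.
raw = t.untyped_storage()
print(raw.nbytes())  # 4 float32 elements -> 16 bytes
```

Unless your own code calls `.storage()`, you can ignore the warning; it does not affect training.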

Sanity Checking DataLoader 0: 0% 0/2 [00:00<?, ?it/s][16:59:04] /usr/local/lib/python3.9/dist-packages/torch/functional.py:641: UserWarning: stft with return_complex=False is deprecated. In a future pytorch release, stft will return complex tensors for all inputs, and return_complex=False will raise an error.
Note: you can still call torch.view_as_real on the complex output to recover the old return format. (Triggered internally at ../aten/src/ATen/native/SpectralOps.cpp:862.)
return _VF.stft(input, n_fft, hop_length, win_length, window, # type: ignore[attr-defined]
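The `stft` warning is likewise only a deprecation notice: future PyTorch versions will always return complex tensors. A hedged sketch of the future-proof call and of recovering the old real-valued layout (the signal shapes here are made up for illustration):

```python
import torch

x = torch.randn(1, 400)        # a short mono signal (illustrative)
window = torch.hann_window(64)

# Future-proof call: return_complex=True yields a complex spectrogram.
spec = torch.stft(x, n_fft=64, hop_length=16, window=window,
                  return_complex=True)

# torch.view_as_real recovers the old (..., 2) real/imag layout
# if downstream code still expects it.
old_format = torch.view_as_real(spec)
print(spec.is_complex(), old_format.shape[-1])  # True 2
```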

[16:59:09] /usr/local/lib/python3.9/dist-packages/lightning/pytorch/loops/fit_loop.py:280: PossibleUserWarning: The number of training batches (3) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
rank_zero_warn(

Training: 0it [00:00, ?it/s][16:59:09] Setting current epoch to 0
[16:59:09] Setting total batch idx to 0
[16:59:09] Setting global step to 0
Epoch 17: 0% 0/3 [00:00<?, ?it/s, v_num=0, loss/g/total=94.20, loss/g/fm=7.760, loss/g/mel=70.40, loss/g/kl=13.10, loss/g/lf0=0.0224, loss/d/total=1.330]

These warnings are normal, and your epoch count is advancing; the real issue is that loss/g/mel is too high, so something may be wrong somewhere.

zomehwh changed discussion status to closed
