Type error

#4 opened by KenDoStudio
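
Switching the model slot to an ONNX RVC model fails with the errors below: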

[2023-07-06 10:08:12] connet sid : X9cbe-myDlqAcFwfAAAe
[2023-07-06 10:08:12] connet sid : sPXA_Vs1j4v21s__AAAf
[Voice Changer] update configuration: modelSlotIndex 1688652494001
[Voice Changer] model slot is changed 0 -> 1
................RVC
[Voice Changer] [RVC] Creating instance
[Voice Changer] [RVC] Initializing...
inferencerTypeinferencerTypeinferencerTypeinferencerType onnxRVC
[Voice Changer] generate new embedder. (no embedder)
[Voice Changer] exception! loading embedder [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 9437184 bytes.
Traceback (most recent call last):
File "voice_changer\RVC\pipeline\PipelineGenerator.py", line 26, in createPipeline
File "voice_changer\RVC\embedder\EmbedderManager.py", line 25, in getEmbedder
File "voice_changer\RVC\embedder\EmbedderManager.py", line 45, in loadEmbedder
File "voice_changer\RVC\embedder\FairseqHubert.py", line 12, in loadModel
File "fairseq\checkpoint_utils.py", line 473, in load_model_ensemble_and_task
model = task.build_model(cfg.model, from_checkpoint=True)
File "fairseq\tasks\fairseq_task.py", line 338, in build_model
model = models.build_model(cfg, self, from_checkpoint)
File "fairseq\models_init_.py", line 106, in build_model
return model.build_model(cfg, task)
File "fairseq\models\hubert\hubert.py", line 335, in build_model
model = HubertModel(cfg, task.cfg, task.dictionaries)
File "fairseq\models\hubert\hubert.py", line 298, in init
self.encoder = TransformerEncoder(cfg)
File "fairseq\models\wav2vec\wav2vec2.py", line 994, in init
[self.build_encoder_layer(args) for _ in range(args.encoder_layers)]
File "fairseq\models\wav2vec\wav2vec2.py", line 994, in
[self.build_encoder_layer(args) for _ in range(args.encoder_layers)]
File "fairseq\models\wav2vec\wav2vec2.py", line 922, in build_encoder_layer
layer = TransformerSentenceEncoderLayer(
File "fairseq\models\wav2vec\wav2vec2.py", line 1216, in init
self.fc1 = nn.Linear(self.embedding_dim, ffn_embedding_dim)
File "torch\nn\modules\linear.py", line 96, in init
self.weight = Parameter(torch.empty((out_features, in_features), **factory_kwargs))
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 9437184 bytes.
[Voice Changer] Loading index...
[Voice Changer] post_update_settings ex: local variable 'embedder' referenced before assignment
Traceback (most recent call last):
File "restapi\MMVC_Rest_Fileuploader.py", line 73, in post_update_settings
File "voice_changer\VoiceChangerManager.py", line 244, in update_settings
File "voice_changer\VoiceChangerManager.py", line 200, in generateVoiceChanger
File "voice_changer\RVC\RVC.py", line 53, in init
File "voice_changer\RVC\RVC.py", line 59, in initialize
File "voice_changer\RVC\pipeline\PipelineGenerator.py", line 43, in createPipeline
UnboundLocalError: local variable 'embedder' referenced before assignment
---------- REMOVING ---------------
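
For context: the failed allocation is only 9437184 bytes = 9 × 1024 × 1024 = 9 MiB, which matches the fc1 weight in the traceback (3072 × 768 float32 values × 4 bytes). So the first error most likely means the process had already exhausted available RAM/pagefile while building the HuBERT encoder layers, not that a single oversized tensor was requested.

The second traceback looks like a follow-on failure rather than a separate bug: PipelineGenerator.py apparently catches the embedder-loading exception, logs it, and continues, so the local variable embedder is never assigned by the time line 43 uses it. A minimal sketch of that pattern, with illustrative names only (not the project's actual code):

def load_embedder():
    # Simulates the embedder load failing with the allocator error above.
    raise RuntimeError("DefaultCPUAllocator: not enough memory")

def create_pipeline():
    try:
        embedder = load_embedder()
    except Exception as e:
        # The exception is logged but swallowed, so execution continues...
        print(f"[Voice Changer] exception! loading embedder {e}")
    print("[Voice Changer] Loading index...")
    # ...and 'embedder' was never bound in this scope:
    return embedder  # UnboundLocalError: local variable 'embedder' referenced before assignment

create_pipeline()

Freeing up memory (closing other applications, increasing the Windows pagefile, or using a smaller model) should make the second error disappear along with the first.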
