
AttributeError: 'list' object has no attribute 'eval'

#4
by thelou1s - opened

Traceback (most recent call last):
File "tts.py", line 21, in
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
File "/Users/luis/miniconda3/envs/python36/lib/python3.6/site-packages/fairseq/models/text_to_speech/hub_interface.py", line 132, in get_prediction
prediction = generator.generate(model, sample)
File "/Users/luis/miniconda3/envs/python36/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "/Users/luis/miniconda3/envs/python36/lib/python3.6/site-packages/fairseq/speech_generator.py", line 132, in generate
model.eval()
AttributeError: 'list' object has no attribute 'eval'

type(model) should return a FastSpeech2 model. It seems model is a list in your case.
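For reference, here is a minimal sketch of the loading step, assuming the standard hub example for the facebook/fastspeech2-en-ljspeech checkpoint (the checkpoint name and vocoder settings are assumptions; use whatever you loaded). The loader returns a list of models, so the single model passed to get_prediction should be models[0]:

```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub

# The loader returns a *list* of models plus the config and task.
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
    "facebook/fastspeech2-en-ljspeech",
    arg_overrides={"vocoder": "hifigan", "fp16": False},
)

model = models[0]   # unwrap the single FastSpeech2 model
print(type(model))  # should be a FastSpeech2 model class, not <class 'list'>
```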

Thanks for the quick reply. It's because I changed some code; otherwise the logs showed the error below, which I should probably fix first:


TypeError Traceback (most recent call last)
in <module>()
11 model = models[0]
12 TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
---> 13 generator = task.build_generator(model, cfg)
14
15 text = "Hello, this is a test run."

/usr/local/lib/python3.7/dist-packages/fairseq/tasks/text_to_speech.py in build_generator(self, models, cfg, vocoder, **unused)
149 if vocoder is None:
150 vocoder = self.build_default_vocoder()
--> 151 model = models[0]
152 if getattr(model, "NON_AUTOREGRESSIVE", False):
153 return NonAutoregressiveSpeechGenerator(model, vocoder, self.data_cfg)

TypeError: 'FastSpeech2Model' object is not subscriptable

It's not clear where the error comes from. Please paste all the commands you entered as well as the error message, and use Markdown syntax, especially code blocks.

In my experience, I had to change this line:

generator = task.build_generator(model, cfg)

to

generator = task.build_generator(models, cfg)

as well as add the line:

sample['net_input']['src_tokens'] = sample['net_input']['src_tokens'].to(torch.int64)

after

sample = TTSHubInterface.get_model_input(task, text)

to get the example to work.
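Putting it together, here is a minimal end-to-end sketch with both of those changes applied. The checkpoint name and vocoder settings are assumptions taken from the standard model-card example, so adjust them to whatever you are actually loading:

```python
import torch
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface

# Load the checkpoint: `models` is a list, `model` is the single FastSpeech2 model.
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
    "facebook/fastspeech2-en-ljspeech",
    arg_overrides={"vocoder": "hifigan", "fp16": False},
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)

# build_generator expects the list of models, not a single model.
generator = task.build_generator(models, cfg)

text = "Hello, this is a test run."
sample = TTSHubInterface.get_model_input(task, text)

# Cast the token IDs to int64 before running prediction.
sample["net_input"]["src_tokens"] = sample["net_input"]["src_tokens"].to(torch.int64)

# get_prediction takes the single unwrapped model.
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
```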
