When I run the script below, I first get this warning and then the traceback:

You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
  File "/mnt/c/工作/code/envs/pytorch/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 759, in convert_to_tensors
    tensor = as_tensor(value)
  File "/mnt/c/工作/code/envs/pytorch/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 721, in as_tensor
    return torch.tensor(value)
RuntimeError: Could not infer dtype of NoneType

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "inference.py", line 13, in <module>
    inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt")
  File "/mnt/c/工作/code/envs/pytorch/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2883, in __call__
    encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
  File "/mnt/c/工作/code/envs/pytorch/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2989, in _call_one
    return self.encode_plus(
  File "/mnt/c/工作/code/envs/pytorch/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 3062, in encode_plus
    return self._encode_plus(
  File "/mnt/c/工作/code/envs/pytorch/lib/python3.8/site-packages/transformers/tokenization_utils.py", line 722, in _encode_plus
    return self.prepare_for_model(
  File "/mnt/c/工作/code/envs/pytorch/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 3552, in prepare_for_model
    batch_outputs = BatchEncoding(
  File "/mnt/c/工作/code/envs/pytorch/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 224, in __init__
    self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
  File "/mnt/c/工作/code/envs/pytorch/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 775, in convert_to_tensors
    raise ValueError(
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`input_ids` in this case) have excessive nesting (inputs type `list` where type `int` is expected).
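For reference, the underlying RuntimeError is just what torch raises when a None ends up in the list being converted, so it looks like the tokenizer is producing None ids for some characters before the tensor conversion. A minimal illustration:

import torch
torch.tensor([None])  # RuntimeError: Could not infer dtype of NoneType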
My transformers version: 4.41.2
My code runs in an offline environment and is as follows:
from transformers import VitsModel, AutoTokenizer, logging
import torch
import scipy.io.wavfile  # "import scipy" alone does not reliably expose scipy.io.wavfile

logging.set_verbosity_error()

# Local (offline) copies of the MMS-TTS checkpoint and its tokenizer.
model = VitsModel.from_pretrained("/mnt/c/工作/code/workspace/mms_tts_model/mms_tts")
tokenizer = AutoTokenizer.from_pretrained("/mnt/c/工作/code/workspace/mms_tts_model/mms_tts")

text = "استىنلا بەك ئاۋارىچىلىق"
inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt")  # fails here

with torch.no_grad():
    output = model(**inputs).waveform

# waveform has shape (batch, samples); squeeze to 1-D for scipy.
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
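In case it helps with triage, my guess is that some characters in the Uyghur text are missing from the tokenizer's vocabulary and get mapped to None ids. A quick diagnostic sketch, appended to the script above (assuming a character-level VitsTokenizer; get_vocab() is the standard tokenizer API, and the check ignores any normalization the tokenizer may apply first):

vocab = tokenizer.get_vocab()
missing = sorted({ch for ch in text if ch not in vocab})
print("characters missing from the tokenizer vocab:", missing)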
Please help me out!!! Thanks!!!