Working inside Cog?
Did anybody get this model working inside a Cog container?
I'm getting this error and I'm not sure why :/
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py", line 372, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/usr/local/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/fastapi/applications.py", line 269, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/applications.py", line 124, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 93, in __call__
    raise exc
  File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    raise e
  File "/usr/local/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 670, in __call__
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 266, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 65, in app
    response = await func(request)
  File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 227, in app
    raw_response = await run_endpoint_function(
  File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 162, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "/usr/local/lib/python3.8/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "/usr/local/lib/python3.8/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.8/site-packages/cog/server/http.py", line 64, in predict
    output = predictor.predict(**request.input.dict())
  File "predict.py", line 17, in predict
    self.generator = self.task.build_generator(self.model, self.cfg)
  File "/fairseq/fairseq/tasks/text_to_speech.py", line 151, in build_generator
    model = models[0]
TypeError: 'FastSpeech2Model' object is not subscriptable
```
What is a Cog container? Also pinging @anton-l here
@patrickvonplaten ah sorry, it's a Docker utility/framework for ML/AI: https://github.com/replicate/cog. Basically you define one YAML file with the prerequisites, and it handles most of the rest for you, like serving an API.
@Vlado which fairseq version do you use? The pip release hasn't been updated in quite a while, so you may need to install it from source: https://github.com/facebookresearch/fairseq#requirements-and-installation
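(If you're not sure which version is installed, something like this should tell you; it assumes fairseq exposes `__version__`, which recent releases do:)

```python
# Print the installed fairseq version (assumes fairseq defines __version__):
import fairseq
print(fairseq.__version__)
```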
@anton-l I'm using the latest from git. My cog.yml looks like this:
```yaml
build:
  gpu: false
  python_version: "3.8"
  python_packages:
    - torch==1.11.0
    - huggingface-hub==0.7.0
run:
  - git clone https://github.com/pytorch/fairseq && cd fairseq && pip install --editable ./
```
and the predictor file is just this:
```python
import os
os.environ['HF_HOME'] = '/src/cache'

from cog import BasePredictor, Path, Input
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface


class Predictor(BasePredictor):
    def predict(self, text: str = Input(description="Sentence to speak out")) -> str:
        self.models, self.cfg, self.task = load_model_ensemble_and_task_from_hf_hub(
            "facebook/fastspeech2-en-ljspeech",
            arg_overrides={"vocoder": "hifigan", "fp16": False},
        )
        self.model = self.models[0]  # <--- THIS PART FAILS
        TTSHubInterface.update_cfg_with_data_cfg(self.cfg, self.task.data_cfg)
        self.generator = self.task.build_generator(self.model, self.cfg)
        self.sample = TTSHubInterface.get_model_input(self.task, self.text)
        self.wav, self.rate = TTSHubInterface.get_prediction(
            self.task, self.model, self.generator, self.sample
        )
        return self.wav, self.rate
```
Maybe I'm doing it wrong :)
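For later readers: per the traceback, the line that actually raises is the `build_generator` call, because fairseq's text-to-speech task expects a *list* of models there. Below is a sketch of the predictor with that change, plus a few assumptions on my part: the loading is moved into Cog's `setup()` so it runs once per container rather than per request, `text` is used instead of the undefined `self.text`, and `torchaudio` (not in the cog.yml above) is used to write the waveform so the endpoint can return a `Path`, since as far as I know Cog can't return a raw `(wav, rate)` tuple:

```python
import os
os.environ['HF_HOME'] = '/src/cache'

import torchaudio  # assumption: add torchaudio to python_packages in cog.yml
from cog import BasePredictor, Input, Path
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface


class Predictor(BasePredictor):
    def setup(self):
        # Runs once at container start, so the model isn't reloaded on every request.
        self.models, self.cfg, self.task = load_model_ensemble_and_task_from_hf_hub(
            "facebook/fastspeech2-en-ljspeech",
            arg_overrides={"vocoder": "hifigan", "fp16": False},
        )
        TTSHubInterface.update_cfg_with_data_cfg(self.cfg, self.task.data_cfg)
        # build_generator indexes models[0] internally, so pass the whole list.
        self.generator = self.task.build_generator(self.models, self.cfg)

    def predict(self, text: str = Input(description="Sentence to speak out")) -> Path:
        sample = TTSHubInterface.get_model_input(self.task, text)
        # get_prediction takes the single model, not the list.
        wav, rate = TTSHubInterface.get_prediction(
            self.task, self.models[0], self.generator, sample
        )
        out_path = "/tmp/output.wav"
        # wav is a 1-D float tensor; add a channel dimension before saving.
        torchaudio.save(out_path, wav.unsqueeze(0).cpu(), rate)
        return Path(out_path)
```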
I'm running into the same problem:

```
model = models[0]
TypeError: 'FastSpeech2Model' object does not support indexing
```
> @Vlado which fairseq version do you use? The pip release hasn't been updated in quite a while, so you may need to install it from source: https://github.com/facebookresearch/fairseq#requirements-and-installation
File "test_for_transformer.py", line 12, in
generator = task.build_generator(model, cfg)
File "/data/AI-Capability/fairseq/fairseq/tasks/text_to_speech.py", line 151, in build_generator
model = models[0]
TypeError: 'FastSpeech2Model' object is not subscriptable
Exactly the same question.
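All the tracebacks point at the same spot: `build_generator` in fairseq's text-to-speech task indexes into its `models` argument, so it expects a list of models rather than a single model. A paraphrased sketch of the failing code (not the exact fairseq source):

```python
# Paraphrased from fairseq/fairseq/tasks/text_to_speech.py (line 151 in the
# tracebacks above): build_generator assumes `models` is a list.
def build_generator(self, models, cfg, **kwargs):
    model = models[0]  # TypeError if a bare FastSpeech2Model is passed instead of [model]
    ...
```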
Try this (imports and the load call filled in from earlier in the thread):

```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface

models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
    "facebook/fastspeech2-en-ljspeech", arg_overrides={"vocoder": "hifigan", "fp16": False}
)
model = models  # keep the list: build_generator indexes it internally
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator(model, cfg)
text = "Hello, this is a test run."
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model[0], generator, sample)
```

i.e. keep the list and move the `[0]` indexing to the `get_prediction` call. But I am not sure the result is correct.
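If you're testing in a notebook, one way to sanity-check the result is to play it back with IPython (depending on your setup, `wav` may need `.cpu().numpy()` first):

```python
# Play the generated waveform in a Jupyter notebook to verify it sounds right.
import IPython.display as ipd

ipd.Audio(wav, rate=rate)
```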
@vlado @JuliaQuQu I edited your comments to use backticks around code blocks to improve legibility