How to run batched inference

#1
by Rakshith291 - opened

As shown in the example, passing a list of texts to g2p does not use more than 2 GB of GPU RAM.

So I was wondering: how can I fully utilize the GPU?

SpeechBrain org

Passing a list of texts is expected to create a batch behind the scenes, so GPU usage depends on the size of the list you pass per call.
You can also use the model directly, without the high-level GraphemeToPhoneme wrapper, to gain more control over batching.
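To increase GPU utilization, you can pass larger lists per call, since each call is processed as one batch. A minimal sketch of chunking a text corpus into fixed-size batches is below; the `GraphemeToPhoneme` import path, the `soundchoice-g2p` model source, and the `run_opts` device option are assumptions based on SpeechBrain's pretrained-model interface and may differ in your version.

```python
from typing import Iterator, List

def chunk(texts: List[str], batch_size: int) -> Iterator[List[str]]:
    """Yield successive batches of at most batch_size texts."""
    for i in range(0, len(texts), batch_size):
        yield texts[i:i + batch_size]

# Hypothetical usage with SpeechBrain's high-level wrapper (assumed API):
# from speechbrain.inference.text import GraphemeToPhoneme
# g2p = GraphemeToPhoneme.from_hparams(
#     source="speechbrain/soundchoice-g2p",
#     run_opts={"device": "cuda"},  # assumed option for GPU placement
# )
# phonemes = []
# for batch in chunk(all_texts, batch_size=256):
#     # One call = one batch on the GPU; raise batch_size until
#     # GPU memory is saturated.
#     phonemes.extend(g2p(batch))
```

Increasing `batch_size` until you approach your GPU's memory limit is the usual way to trade memory for throughput.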
