Batch Inference

#21, opened by leonardo-avila

Is it possible to use batch inference with the Text-to-Text Translation (T2TT) model? `translator.predict()` expects the input to be a single string.

I believe not with a batch of strings, but it does work if you pass a batch of audio arrays, which you can load by following this code: https://github.com/fairinternal/seamless_communication/blob/6bac442c00a72570fec44f91cc3917decd20e8e4/src/seamless_communication/models/inference/translator.py#L205
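As a rough illustration of what "a batch of audio arrays" looks like, variable-length waveforms are typically zero-padded into one rectangular array before being fed to the model. The `collate_audio` helper below is hypothetical (not part of seamless_communication) and only sketches that padding step:

```python
import numpy as np

# Hypothetical helper: pad a list of variable-length 1-D audio arrays
# into a single (batch, max_length) array plus the original lengths.
def collate_audio(waveforms):
    lengths = [len(w) for w in waveforms]
    batch = np.zeros((len(waveforms), max(lengths)), dtype=np.float32)
    for i, w in enumerate(waveforms):
        batch[i, : len(w)] = w  # left-align each clip, zero-pad the tail
    return batch, lengths

# Two fake clips standing in for audio loaded from disk (e.g. via torchaudio).
clips = [np.random.randn(16000).astype(np.float32),
         np.random.randn(24000).astype(np.float32)]
batch, lengths = collate_audio(clips)
print(batch.shape)  # (2, 24000)
```

The lengths are kept alongside the padded batch so the model can mask out the padding when processing each clip.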
