[Cache Request] facebook/seamless-m4t-v2-large

#13
by aitransync - opened

Please add the following model to the neuron cache

AWS Inferentia and Trainium org

The inference cache is only available for causal LM models for now. cc @Jingya
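
For context, a minimal sketch of the path that causal LM checkpoints currently take through optimum-neuron (the model id and shapes below are only illustrative; exact values depend on your instance and use case):

```python
# Minimal sketch, assuming optimum-neuron on an Inferentia2/Trainium instance.
# Only causal LM checkpoints go through this path today; compiled artifacts are
# reused from the hosted neuron cache when a matching entry exists.
from optimum.neuron import NeuronModelForCausalLM

compiler_args = {"num_cores": 2, "auto_cast_type": "fp16"}  # example values
input_shapes = {"batch_size": 1, "sequence_length": 2048}   # example values

model = NeuronModelForCausalLM.from_pretrained(
    "gpt2",       # illustrative causal LM; any supported checkpoint works
    export=True,  # compile on load, pulling cached artifacts if available
    **compiler_args,
    **input_shapes,
)
```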

AWS Inferentia and Trainium org
edited Mar 13

We do not have seamless-m4t-v2 support yet, not even in optimum main, so we will need to add support for its export and inference first. Besides, given the size of this model, we might need tensor parallelism (TP) support for it as well...
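
As a rough illustration of the size concern (plain transformers, nothing optimum-neuron specific): counting the checkpoint's parameters shows why a single device may not be enough and why tensor parallelism could be needed.

```python
# Rough size check with plain transformers (assumes the checkpoint fits in host RAM);
# this only motivates the tensor-parallelism remark, it does not compile anything.
from transformers import AutoModel

model = AutoModel.from_pretrained("facebook/seamless-m4t-v2-large")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters")
```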
