[Cache Request] mistralai/Mistral-7B-Instruct-v0.3

#86
by xapss - opened

Please add the following model to the neuron cache.

AWS Inferentia and Trainium org

This model's tokenizer requires a more recent version of transformers than the one currently integrated in optimum-neuron 0.0.22. Stay tuned for the next release.

AWS Inferentia and Trainium org

The model is available with optimum-neuron==0.0.23. It will soon be available for direct deployment from the model card.
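For reference, a minimal sketch of how such a model can be exported and run with optimum-neuron on a Neuron device; the compilation settings (batch size, sequence length, number of cores, cast type) are illustrative assumptions, not values stated in this thread.

```python
# Minimal sketch, assuming optimum-neuron >= 0.0.23 on an Inferentia/Trainium instance.
# Compiled artifacts are fetched from the neuron cache when a matching entry exists.
from optimum.neuron import NeuronModelForCausalLM
from transformers import AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.3"

# Illustrative compilation settings; adjust to your instance and workload.
compiler_args = {"num_cores": 2, "auto_cast_type": "fp16"}
input_shapes = {"batch_size": 1, "sequence_length": 4096}

model = NeuronModelForCausalLM.from_pretrained(
    model_id,
    export=True,
    **compiler_args,
    **input_shapes,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```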

dacorvo changed discussion status to closed
