# Mathstral compiled for Neuron

This model has been compiled to run on an inf2.24xlarge instance on AWS. The inf2.24xlarge exposes 12 NeuronCores, and this compilation uses all 12. The compilation parameters are:

```python
SEQUENCE_LENGTH = 4096
BATCH_SIZE = 4
NUM_CORES = 12
PRECISION = "bf16"
```

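Once the compiled artifacts are on the Hub, they can be loaded directly on an inf2.24xlarge without re-exporting. A minimal usage sketch, assuming the repository id `nithiyn/mathstral-neuron` and the standard `optimum-neuron` / `transformers` APIs; the prompt is illustrative:

```python
# Load the precompiled Neuron model and generate on an inf2.24xlarge.
from optimum.neuron import NeuronModelForCausalLM
from transformers import AutoTokenizer

model = NeuronModelForCausalLM.from_pretrained("nithiyn/mathstral-neuron")
tokenizer = AutoTokenizer.from_pretrained("nithiyn/mathstral-neuron")

inputs = tokenizer("Prove that sqrt(2) is irrational.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```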