# Llama-3-11.5B-v2
Thank you to Meta for the weights of Meta-Llama-3-8B.
This is an upscaling of the Meta-Llama-3-8B AI model using techniques created for chargoddard/mistral-11b-slimorca. This model has been upscaled from 8B parameters to 11.5B parameters without any continued pretraining or fine-tuning.
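The upscaling technique referenced here (as used for chargoddard/mistral-11b-slimorca) duplicates overlapping ranges of transformer layers with a mergekit passthrough merge, growing the layer count without training any new weights. The sketch below is a hypothetical mergekit config illustrating the idea; the slice ranges shown are an assumption for illustration, not the exact ranges used to build this model.

```yaml
# Hypothetical mergekit passthrough config (illustrative slice ranges only).
# Duplicating an overlapping middle range of layers grows an 8B Llama-3
# model toward ~11.5B parameters with no additional training.
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B
        layer_range: [0, 24]   # first 24 of the original 32 layers
  - sources:
      - model: meta-llama/Meta-Llama-3-8B
        layer_range: [8, 32]   # layers 8-31 repeated, overlapping the first slice
merge_method: passthrough
dtype: bfloat16
```

Because the passthrough method simply stacks the listed slices, the resulting model has 48 layers in this sketch; the overlap region is what adds depth while keeping the embedding and head weights unchanged.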
Unlike version 1, this model has no issues at fp16 or in any quantization.
The model that was used to create this one is linked below: