---
base_model: []
license: other
library_name: transformers
license_name: llama-3
license_link: https://llama.meta.com/llama3/license/
---
This is the first version of the Llama-3 upscale. Version 2 is now out and does not have any of the issues that this version has. Please use version 2 instead, linked below:
- https://huggingface.co/Replete-AI/Llama-3-11.5B-Instruct-v2
__________________________________________________________________
Llama-3-13B-Instruct
Thank you to Meta for the weights of Meta-Llama-3-8B-Instruct.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/aJJxKus1wP5N-euvHEUq7.png)
This is an upscaling of the Meta-Llama-3-8B-Instruct model using techniques created for Mistral-Evolved-11b-v0.1. The model has been upscaled from 8B parameters to 13B parameters without any continued pretraining or fine-tuning.
In testing, the model functions correctly at fp16, but shows issues at 4-bit quantization using bitsandbytes.
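Given that note, a minimal loading sketch in full fp16 (no quantization) may be useful. This assumes the `transformers` and `torch` packages and uses the v2 repo id recommended above; the actual download is large and gated behind the Llama-3 license, so it is left commented out here.

```python
# Repo id of the recommended v2 model (see link above).
MODEL_ID = "Replete-AI/Llama-3-11.5B-Instruct-v2"

def fp16_load_kwargs():
    """Keyword arguments for from_pretrained: full fp16, no bitsandbytes quantization."""
    return {
        "torch_dtype": "float16",  # avoid 4-bit quantization, which showed issues
        "device_map": "auto",      # spread layers across available GPUs
    }

# Uncomment to actually download and load the weights (large download, GPU recommended):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# model = AutoModelForCausalLM.from_pretrained(MODEL_ID, **fp16_load_kwargs())
```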
The model that was used to create this one is linked below:
https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct