This Llama πŸ¦™ is stored in πŸ‡ͺπŸ‡Ί

Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.

The files in this repository are stored in the EU region on Hugging Face, thanks to our new multi-region support.
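
Since the checkpoint is in the Hugging Face Transformers format, it can be loaded with the standard `transformers` API. The snippet below is a minimal sketch; the repository id is a placeholder that should be replaced with this repository's actual id, and access may require logging in with a Hugging Face token if the weights are gated.

```python
# Minimal loading sketch, assuming a standard Llama 2 7B checkpoint
# in Transformers format. Replace the placeholder repository id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/Llama-2-7b-hf"  # placeholder: substitute this repository's id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # weights ship as F32 / FP16 tensors
    device_map="auto",          # requires `accelerate` to be installed
)

prompt = "The llama is a domesticated South American"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```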

Safetensors weights: 6.74B params, tensor types F32 and FP16.
The serverless Inference API has been turned off for this model.