GGUF conversion (f16) of https://huggingface.co/yujiepan/llama-2-tiny-random

Download

pip install huggingface-hub

From CLI:

huggingface-cli download \
aladar/llama-2-tiny-random-GGUF \
llama-2-tiny-random.gguf \
--local-dir . \
--local-dir-use-symlinks False
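The same file can also be fetched from Python with hf_hub_download (part of huggingface_hub). A minimal sketch, using the same repo and filename as the CLI command above:

from huggingface_hub import hf_hub_download

# Download llama-2-tiny-random.gguf into the current directory
path = hf_hub_download(
    repo_id="aladar/llama-2-tiny-random-GGUF",
    filename="llama-2-tiny-random.gguf",
    local_dir=".",
)
print(path)  # local path to the downloaded GGUF file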
Model details
Format: GGUF
Model size: 513k params
Architecture: llama
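Once downloaded, the file can be loaded with any GGUF-compatible runtime. A minimal sketch using llama-cpp-python (an assumption; install it with pip install llama-cpp-python). Since the weights are tiny and random, the output is meaningless and only useful for smoke-testing a GGUF loading pipeline:

from llama_cpp import Llama

# Load the tiny random GGUF model from the current directory
llm = Llama(model_path="./llama-2-tiny-random.gguf")

# Run a short completion; expect gibberish from random weights
out = llm("Hello", max_tokens=8)
print(out["choices"][0]["text"])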