Quantization made by Richard Erkhov.
# llama-2-tiny-random - GGUF
- Model creator: https://huggingface.co/yujiepan/
- Original model: https://huggingface.co/yujiepan/llama-2-tiny-random/
| Name | Quant method | Size |
|---|---|---|
| llama-2-tiny-random.Q2_K.gguf | Q2_K | 0.0GB |
| llama-2-tiny-random.IQ3_XS.gguf | IQ3_XS | 0.0GB |
| llama-2-tiny-random.IQ3_S.gguf | IQ3_S | 0.0GB |
| llama-2-tiny-random.Q3_K_S.gguf | Q3_K_S | 0.0GB |
| llama-2-tiny-random.IQ3_M.gguf | IQ3_M | 0.0GB |
| llama-2-tiny-random.Q3_K.gguf | Q3_K | 0.0GB |
| llama-2-tiny-random.Q3_K_M.gguf | Q3_K_M | 0.0GB |
| llama-2-tiny-random.Q3_K_L.gguf | Q3_K_L | 0.0GB |
| llama-2-tiny-random.IQ4_XS.gguf | IQ4_XS | 0.0GB |
| llama-2-tiny-random.Q4_0.gguf | Q4_0 | 0.0GB |
| llama-2-tiny-random.IQ4_NL.gguf | IQ4_NL | 0.0GB |
| llama-2-tiny-random.Q4_K_S.gguf | Q4_K_S | 0.0GB |
| llama-2-tiny-random.Q4_K.gguf | Q4_K | 0.0GB |
| llama-2-tiny-random.Q4_K_M.gguf | Q4_K_M | 0.0GB |
| llama-2-tiny-random.Q4_1.gguf | Q4_1 | 0.0GB |
| llama-2-tiny-random.Q5_0.gguf | Q5_0 | 0.0GB |
| llama-2-tiny-random.Q5_K_S.gguf | Q5_K_S | 0.0GB |
| llama-2-tiny-random.Q5_K.gguf | Q5_K | 0.0GB |
| llama-2-tiny-random.Q5_K_M.gguf | Q5_K_M | 0.0GB |
| llama-2-tiny-random.Q5_1.gguf | Q5_1 | 0.0GB |
| llama-2-tiny-random.Q6_K.gguf | Q6_K | 0.0GB |
Original model description:
```yaml
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
  example_title: Hello world
  group: Python
```
## yujiepan/llama-2-tiny-random
This model is randomly initialized, using the config from meta-llama/Llama-2-7b-chat-hf but with the following modifications:
```json
{
  "hidden_size": 8,
  "intermediate_size": 32,
  "num_attention_heads": 2,
  "num_hidden_layers": 1,
  "num_key_value_heads": 2
}
```
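With these overrides the model is tiny, which is why every quant in the table above rounds to 0.0GB. A back-of-the-envelope parameter count (a sketch, assuming the base Llama-2 vocab_size of 32000 and standard Llama layer shapes; the actual file adds quantization metadata on top):

```python
# Rough parameter count for the tiny config above.
# Assumption: vocab_size=32000 (the Llama-2 default) and untied
# embedding / lm_head weights; this estimates parameters, not file size.

hidden = 8       # hidden_size
inter = 32       # intermediate_size
layers = 1       # num_hidden_layers
vocab = 32000    # assumed vocab_size from the base Llama-2 config

embed = vocab * hidden        # token embedding matrix
lm_head = vocab * hidden      # output projection
attn = 4 * hidden * hidden    # q, k, v, o projections (kv heads == attn heads here)
mlp = 3 * hidden * inter      # gate, up, down projections
norms = 2 * hidden            # input + post-attention RMSNorm weights
final_norm = hidden           # final RMSNorm weight

total = embed + lm_head + layers * (attn + mlp + norms) + final_norm
print(total)  # 513048 — roughly half a million parameters
```

At half a million parameters, even an unquantized fp16 copy is about 1 MB, so every quantized variant is far below the 0.1GB rounding threshold shown in the table.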