Undi95-style frankenmerge of TinyLlama 1.1B: https://github.com/jzhang38/TinyLlama https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0

Custom GGUF quants are included.

The secret sauce:

# mergekit passthrough config: stacks layers 0-13 and 8-21 of the same
# 22-layer TinyLlama model (layer_range is end-exclusive), duplicating
# layers 8-13 for 28 layers total (~1.36B params)
slices:
  - sources:
    - model: "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
      layer_range: [0, 14]
  - sources:
    - model: "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
      layer_range: [8, 22]
merge_method: passthrough
dtype: bfloat16
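
To reproduce the merge, a minimal sketch using the mergekit CLI (assuming the config above is saved as tinyfrank.yml; the output path is illustrative and flags vary between mergekit versions):

pip install mergekit
mergekit-yaml tinyfrank.yml ./tinyfrank-1.4B --copy-tokenizer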

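The bundled q6L file is a custom quant mix whose exact recipe isn't spelled out here; a plain Q6_K quant can be produced with llama.cpp's stock tools (a sketch, assuming the merged model lives in ./tinyfrank-1.4B and using the convert.py/quantize tool names from llama.cpp builds of this vintage):

python convert.py ./tinyfrank-1.4B --outtype f16 --outfile tinyfrank-f16.gguf
./quantize tinyfrank-f16.gguf tinyfrank-q6_k.gguf q6_k
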
How to run as GGUF:

# build llama.cpp (this era's Makefile build; newer releases use CMake and name the binary llama-server)
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make -j
wget https://huggingface.co/SkunkworksAI/tinyfrank-1.4B/resolve/main/tinyfrank-q6L.gguf
# serve on the given host (port defaults to 8080) with a 512-token context
./server -m tinyfrank-q6L.gguf --host "my.internal.ip.or.my.cloud.host.name.goes.here.com" -c 512
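
Once the server is up, a quick smoke test against its /completion endpoint (a sketch; assumes the default port 8080, substitutes localhost for the placeholder host, and uses TinyLlama's Zephyr-style chat template):

curl http://localhost:8080/completion -H "Content-Type: application/json" \
  -d '{"prompt": "<|user|>\nWhat is a frankenmerge?</s>\n<|assistant|>\n", "n_predict": 64}'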