
Quantization made by Richard Erkhov.

Github | Discord | Request more models

llama-160m - GGUF

| Name | Quant method | Size |
|------|--------------|------|
| llama-160m.Q2_K.gguf | Q2_K | 0.07GB |
| llama-160m.IQ3_XS.gguf | IQ3_XS | 0.07GB |
| llama-160m.IQ3_S.gguf | IQ3_S | 0.07GB |
| llama-160m.Q3_K_S.gguf | Q3_K_S | 0.07GB |
| llama-160m.IQ3_M.gguf | IQ3_M | 0.08GB |
| llama-160m.Q3_K.gguf | Q3_K | 0.08GB |
| llama-160m.Q3_K_M.gguf | Q3_K_M | 0.08GB |
| llama-160m.Q3_K_L.gguf | Q3_K_L | 0.08GB |
| llama-160m.IQ4_XS.gguf | IQ4_XS | 0.09GB |
| llama-160m.Q4_0.gguf | Q4_0 | 0.09GB |
| llama-160m.IQ4_NL.gguf | IQ4_NL | 0.09GB |
| llama-160m.Q4_K_S.gguf | Q4_K_S | 0.09GB |
| llama-160m.Q4_K.gguf | Q4_K | 0.1GB |
| llama-160m.Q4_K_M.gguf | Q4_K_M | 0.1GB |
| llama-160m.Q4_1.gguf | Q4_1 | 0.1GB |
| llama-160m.Q5_0.gguf | Q5_0 | 0.11GB |
| llama-160m.Q5_K_S.gguf | Q5_K_S | 0.11GB |
| llama-160m.Q5_K.gguf | Q5_K | 0.11GB |
| llama-160m.Q5_K_M.gguf | Q5_K_M | 0.11GB |
| llama-160m.Q5_1.gguf | Q5_1 | 0.12GB |
| llama-160m.Q6_K.gguf | Q6_K | 0.12GB |
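As a rough sanity check on these sizes, the effective bits per weight of a file can be estimated as file size in bytes × 8 ÷ parameter count (162M here). A minimal sketch (the helper name and the 1 GB = 1e9 bytes convention are illustrative assumptions matching the table's rounding):

```python
N_PARAMS = 162e6  # parameter count reported for llama-160m

def effective_bpw(size_gb: float) -> float:
    """Estimate effective bits per weight from a GGUF file size.

    Assumes 1 GB = 1e9 bytes, consistent with the table's rounding.
    """
    return size_gb * 1e9 * 8 / N_PARAMS

# Q6_K lands near its nominal 6 bits per weight...
print(round(effective_bpw(0.12), 2))  # ~5.93
# ...but Q2_K comes out well above 2 bits per weight: in a model this
# small, tensors kept at higher precision (e.g. embeddings) dominate.
print(round(effective_bpw(0.07), 2))  # ~3.46
```

This is why the low-bit quants save proportionally less space here than they would on a multi-billion-parameter model.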

Original model description:

license: apache-2.0
language:
- en
datasets:
- wikipedia
pipeline_tag: text-generation

Model description

This is a LLaMA-like model with only 160M parameters, trained on Wikipedia and parts of the C4-en and C4-realnewslike datasets.

No evaluation has been conducted yet, so use it with care.

The model was developed mainly to serve as the base small speculative (draft) model in the SpecInfer paper.
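In SpecInfer-style serving, a small model like this one drafts several tokens ahead, and the large target model verifies them in parallel, keeping the longest agreeing prefix. The following toy greedy draft-and-verify loop illustrates the idea; the model stand-ins and all names are made-up assumptions, and the real SpecInfer algorithm verifies a token tree, not a single sequence:

```python
def target_next(seq):
    # Deterministic toy stand-in for the large target model's greedy next token.
    return (sum(seq) * 7 + 3) % 50

def draft_next(seq):
    # Toy stand-in for the small draft model: agrees with the target most
    # of the time but sometimes diverges, as a real 160M draft would.
    t = (sum(seq) * 7 + 3) % 50
    return (t + 1) % 50 if sum(seq) % 4 == 0 else t

def greedy_generate(prompt, n_tokens):
    # Baseline: plain greedy decoding with the target model alone.
    seq = list(prompt)
    for _ in range(n_tokens):
        seq.append(target_next(seq))
    return seq[len(prompt):]

def speculative_generate(prompt, n_tokens, k=4):
    # Draft-and-verify loop: the draft proposes k tokens; the target keeps
    # the longest agreeing prefix and supplies one correction on mismatch.
    seq, end = list(prompt), len(prompt) + n_tokens
    while len(seq) < end:
        ctx, draft = list(seq), []
        for _ in range(k):  # 1. draft k tokens autoregressively
            draft.append(draft_next(ctx))
            ctx.append(draft[-1])
        for t in draft:     # 2. verify (done in parallel in real systems)
            expected = target_next(seq)
            if t != expected:
                seq.append(expected)  # reject: take the target's own token
                break
            seq.append(t)             # accept the drafted token
            if len(seq) == end:
                break
    return seq[len(prompt):]

# Greedy speculative decoding reproduces the target's output exactly;
# the speedup comes from verifying drafted tokens in one batched pass.
assert speculative_generate([1, 2, 3], 20) == greedy_generate([1, 2, 3], 20)
```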

Citation

To cite the model, please use:

@misc{miao2023specinfer,
      title={SpecInfer: Accelerating Generative LLM Serving with Speculative Inference and Token Tree Verification}, 
      author={Xupeng Miao and Gabriele Oliaro and Zhihao Zhang and Xinhao Cheng and Zeyu Wang and Rae Ying Yee Wong and Zhuoming Chen and Daiyaan Arfeen and Reyna Abhyankar and Zhihao Jia},
      year={2023},
      eprint={2305.09781},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}