afrideva committed
Commit ffce8ef (1 parent: 618a7c6)

Upload README.md with huggingface_hub

Files changed (1): README.md ADDED (+60 −0)
---
base_model: JackFram/llama-160m
datasets:
- wikipedia
inference: false
language:
- en
license: other
model_creator: JackFram
model_name: llama-160m
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---

# JackFram/llama-160m-GGUF

Quantized GGUF model files for [llama-160m](https://huggingface.co/JackFram/llama-160m) from [JackFram](https://huggingface.co/JackFram).

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| [llama-160m.fp16.gguf](https://huggingface.co/afrideva/llama-160m-GGUF/resolve/main/llama-160m.fp16.gguf) | fp16 | 326.58 MB |
| [llama-160m.q2_k.gguf](https://huggingface.co/afrideva/llama-160m-GGUF/resolve/main/llama-160m.q2_k.gguf) | q2_k | 77.23 MB |
| [llama-160m.q3_k_m.gguf](https://huggingface.co/afrideva/llama-160m-GGUF/resolve/main/llama-160m.q3_k_m.gguf) | q3_k_m | 87.54 MB |
| [llama-160m.q4_k_m.gguf](https://huggingface.co/afrideva/llama-160m-GGUF/resolve/main/llama-160m.q4_k_m.gguf) | q4_k_m | 104.03 MB |
| [llama-160m.q5_k_m.gguf](https://huggingface.co/afrideva/llama-160m-GGUF/resolve/main/llama-160m.q5_k_m.gguf) | q5_k_m | 119.04 MB |
| [llama-160m.q6_k.gguf](https://huggingface.co/afrideva/llama-160m-GGUF/resolve/main/llama-160m.q6_k.gguf) | q6_k | 135.00 MB |
| [llama-160m.q8_0.gguf](https://huggingface.co/afrideva/llama-160m-GGUF/resolve/main/llama-160m.q8_0.gguf) | q8_0 | 174.33 MB |
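The file sizes above can be sanity-checked against the nominal quant widths. A minimal sketch, assuming "MB" means 10^6 bytes and a parameter count of roughly 160M (both approximations; k-quants also mix precisions across tensor types, so the effective bits per weight sit slightly above the nominal width):

```python
# Rough bits-per-weight implied by the table, assuming "MB" = 10^6 bytes
# and ~160M parameters (both approximations).
PARAMS = 160e6

sizes_mb = {
    "fp16": 326.58,
    "q2_k": 77.23,
    "q4_k_m": 104.03,
    "q8_0": 174.33,
}

# bytes -> bits, divided across all weights
bits_per_weight = {name: mb * 1e6 * 8 / PARAMS for name, mb in sizes_mb.items()}
# fp16 comes out near 16.3 and q8_0 near 8.7 bits/weight; the overshoot
# vs. the nominal width is metadata, scales, and tensors kept at higher
# precision.
```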
## Original Model Card:

## Model description

This is a LLaMA-like model with only 160M parameters trained on Wikipedia and part of the C4-en and C4-realnewslike datasets.

No evaluation has been conducted yet, so use it with care.

The model is mainly developed as a base Small Speculative Model in the [SpecInfer](https://arxiv.org/abs/2305.09781) paper.
## Citation

To cite the model, please use:

```bibtex
@misc{miao2023specinfer,
      title={SpecInfer: Accelerating Generative LLM Serving with Speculative Inference and Token Tree Verification},
      author={Xupeng Miao and Gabriele Oliaro and Zhihao Zhang and Xinhao Cheng and Zeyu Wang and Rae Ying Yee Wong and Zhuoming Chen and Daiyaan Arfeen and Reyna Abhyankar and Zhihao Jia},
      year={2023},
      eprint={2305.09781},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```