---
base_model: PY007/TinyLlama-1.1B-Chat-v0.3
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- OpenAssistant/oasst_top1_2023-08-25
inference: false
language:
- en
license: apache-2.0
model_creator: Zhang Peiyuan
model_name: TinyLlama 1.1B Chat v0.3
model_type: tinyllama
prompt_template: '<|im_start|>system

  {system_message}<|im_end|>

  <|im_start|>user

  {prompt}<|im_end|>

  <|im_start|>assistant

  '
quantized_by: TheBloke
---

# TinyLlama 1.1B Chat v0.3 - GGUF

- Model creator: [Zhang Peiyuan](https://huggingface.co/PY007)
- Original model: [TinyLlama 1.1B Chat v0.3](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.3)
- TheBloke quant: [TinyLlama-1.1B-Chat-v0.3-GGUF](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF) 

## Support for `calm`

The quants selected for this repo were chosen to support [calm](https://github.com/iandennismiller/calm), a language model runner that automatically applies the right prompt template, context size, and other settings for each model.
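The prompt template in this card's metadata follows the ChatML format. As an illustration (not part of the original card), here is a minimal sketch of loading one of the GGUF quants with llama-cpp-python and applying that template by hand; the model filename is a placeholder for whichever quant file you download from this repo.

```python
# Minimal sketch: run a GGUF quant from this repo with llama-cpp-python,
# formatting the prompt per the card's ChatML prompt_template.
from llama_cpp import Llama

# Hypothetical filename -- substitute the quant file you actually downloaded.
llm = Llama(model_path="tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf")

system_message = "You are a helpful assistant."
prompt = "Explain what a GGUF file is in one sentence."

# Build the prompt exactly as the card's prompt_template specifies.
full_prompt = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{prompt}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

output = llm(full_prompt, max_tokens=128, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```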