---
license: apache-2.0
pipeline_tag: text-generation
tags:
- cortex.cpp
---

## Overview

The [TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) project aims to pretrain a 1.1B-parameter Llama model on 3 trillion tokens. This is the chat model, fine-tuned on a diverse range of synthetic dialogues generated by ChatGPT.
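
For reference, the upstream TinyLlama-1.1B-Chat model card documents a Zephyr-style chat template. Runtimes such as Cortex and Jan normally apply the chat template automatically, so the sketch below is informational only:

```
<|system|>
You are a helpful assistant.</s>
<|user|>
What is TinyLlama?</s>
<|assistant|>
```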

## Variants

| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [TinyLlama-1b](https://huggingface.co/cortexso/tinyllama/tree/1b) | `cortex run tinyllama:1b` |


## Use it with Jan (UI)

1. Install **Jan** using the [Quickstart](https://jan.ai/docs/quickstart)
2. Use the following model ID in the Jan Model Hub:
    ```bash
    cortexso/tinyllama
    ```
    
## Use it with Cortex (CLI)

1. Install **Cortex** using the [Quickstart](https://cortex.jan.ai/docs/quickstart)
2. Run the model with the command (a sample API request follows below):
    ```bash
    cortex run tinyllama
    ```
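
Once the model is running, Cortex serves a local OpenAI-compatible API. The sketch below assumes the default address `http://127.0.0.1:39281` and the `tinyllama:1b` model ID; both may differ on your setup, so adjust them to match your installation.

```bash
# Hedged sketch: assumes Cortex's default local API address (127.0.0.1:39281)
# and the tinyllama:1b model ID; adjust both to match your installation.
curl http://127.0.0.1:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tinyllama:1b",
    "messages": [
      {"role": "user", "content": "Hello! What can you do?"}
    ]
  }'
```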
    
## Credits

- **Author:** [TinyLlama](https://huggingface.co/TinyLlama)
- **Converter:** [Homebrew](https://www.homebrew.ltd/)
- **Original License:** [Apache 2.0](https://choosealicense.com/licenses/apache-2.0/)
- **Paper:** [TinyLlama Paper](https://arxiv.org/abs/2401.02385)