update config

Files changed:
- .DS_Store +0 -0
- .gitignore +1 -0
- README.md +0 -32
- TinyLlama-1.1B-Chat-v0.2-q8f16_1-iphone.tar +0 -3
- mlc-chat-config.json +2 -2
.DS_Store
DELETED
Binary file (6.15 kB)
.gitignore
ADDED
@@ -0,0 +1 @@
+.DS_Store
README.md
DELETED
@@ -1,32 +0,0 @@
----
-license: apache-2.0
-datasets:
-- cerebras/SlimPajama-627B
-- bigcode/starcoderdata
-- timdettmers/openassistant-guanaco
-language:
-- en
----
-
-# TinyLlama-1.1B pre-built lib & weights for mlc-chat iOS.
-
-https://github.com/jzhang38/TinyLlama
-
-The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training started on 2023-09-01.
-
-<div align="center">
-  <img src="./TinyLlama_logo.png" width="300"/>
-</div>
-
-We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged into many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
-
-#### This Model
-This is the chat model finetuned on [PY007/TinyLlama-1.1B-intermediate-step-240k-503b](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b). The dataset used is [openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
-
-#### How to use
-
-Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
-
-https://github.com/mlc-ai/mlc-llm
-
-https://mlc.ai/mlc-llm/docs/deploy/ios.html
TinyLlama-1.1B-Chat-v0.2-q8f16_1-iphone.tar
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:b319dee9dd9d5c6d48ae661e02a1efecd9ad4d84ab2625d6abb7ff54ab0ab06a
-size 270577
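The deleted .tar above was tracked with Git LFS, so the repository only stored a small pointer file (the three `version`/`oid`/`size` lines in the hunk), not the archive itself. As a minimal sketch of how such a pointer can be read, here is a small Python parser for that key/value format (illustrative only, not part of this repo):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse the key/value lines of a Git LFS pointer file into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        # Each line is "<key> <value>"; split on the first space only.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer content deleted in this commit:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:b319dee9dd9d5c6d48ae661e02a1efecd9ad4d84ab2625d6abb7ff54ab0ab06a
size 270577"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # size in bytes of the real object, not the pointer file
```

Note that `size` refers to the actual object stored on the LFS server; the pointer file itself is only a few hundred bytes.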
mlc-chat-config.json
CHANGED
@@ -1,7 +1,7 @@
 {
   "model_lib": "TinyLlama-1.1B-Chat-v0.2-q8f16_1",
   "local_id": "TinyLlama-1.1B-Chat-v0.2-q8f16_1",
-  "conv_template": "
+  "conv_template": "chatml",
   "temperature": 0.7,
   "repetition_penalty": 1.0,
   "top_p": 0.95,
@@ -18,4 +18,4 @@
   "model_category": "llama",
   "model_name": "TinyLlama-1.1B-Chat-v0.2",
   "vocab_size": 32003
-}
+}
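The substantive change in this commit is switching `conv_template` to `"chatml"`, which tells mlc-chat to wrap each conversation turn in ChatML-style markers. As a rough illustration of that format (a sketch of the general ChatML shape, not mlc-llm's actual template implementation):

```python
def chatml_prompt(messages: list[dict]) -> str:
    """Render a list of {"role": ..., "content": ...} dicts in ChatML style.

    Illustrative sketch only: the real template used by mlc-chat is defined
    inside mlc-llm and selected by the "conv_template" config field.
    """
    parts = []
    for m in messages:
        # Each turn is delimited by <|im_start|> and <|im_end|> markers.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # End with an open assistant turn so the model generates the reply.
    parts.append("<|im_start|>assistant")
    return "\n".join(parts)

prompt = chatml_prompt([{"role": "user", "content": "Hello!"}])
print(prompt)
```

The other fields in the hunk (`temperature`, `repetition_penalty`, `top_p`) are sampling parameters and are unchanged by this commit.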