Commit · 5f570fc
1 Parent(s): c5a4e9c
Upload OpenAssistant/pythia-12b-sft-v8-7k-steps ctranslate fp16 weights
- README.md +112 -0
- config.json +5 -0
- generation_config.json +6 -0
- model.bin +3 -0
- special_tokens_map.json +14 -0
- tokenizer.json +0 -0
- tokenizer_config.json +10 -0
- vocabulary.txt +0 -0
README.md
ADDED
@@ -0,0 +1,112 @@
---
license: apache-2.0
language:
- en
tags:
- ctranslate2
- int8
- float16
- sft
pipeline_tag: text-generation
widget:
- text: <|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
- text: <|prompter|>What's the Earth total population<|endoftext|><|assistant|>
- text: <|prompter|>Write a story about future of AI development<|endoftext|><|assistant|>
---
# Fast-Inference with Ctranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.

Quantized version of [OpenAssistant/pythia-12b-sft-v8-7k-steps](https://huggingface.co/OpenAssistant/pythia-12b-sft-v8-7k-steps).
```bash
pip install "hf-hub-ctranslate2>=2.0.6"
```
Converted on 2023-05-19 using:
```bash
ct2-transformers-converter --model OpenAssistant/pythia-12b-sft-v8-7k-steps --output_dir /home/feil_m/tmp-ct2fast-pythia-12b-sft-v8-7k-steps --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization float16
```
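
The converted directory can also be loaded directly with the plain `ctranslate2` API, without the wrapper library. A minimal sketch, assuming a local copy of the converted weights; the directory path, prompt text, and sampling parameters here are illustrative, not part of the original card:

```python
import ctranslate2
from transformers import AutoTokenizer

# Load the directory produced by ct2-transformers-converter (int8 on CPU).
generator = ctranslate2.Generator(
    "ct2fast-pythia-12b-sft-v8-7k-steps", device="cpu", compute_type="int8"
)
tokenizer = AutoTokenizer.from_pretrained("OpenAssistant/pythia-12b-sft-v8-7k-steps")

# CTranslate2 generators consume token strings, not token ids.
tokens = tokenizer.convert_ids_to_tokens(
    tokenizer.encode("What's the Earth total population")
)

results = generator.generate_batch([tokens], max_length=128, sampling_topk=10)
print(tokenizer.decode(results[0].sequences_ids[0]))
```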

Checkpoint compatible with [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2):
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`

```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer

model_name = "michaelfeil/ct2fast-pythia-12b-sft-v8-7k-steps"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on the model
model = GeneratorCT2fromHfHub(
    # load in int8 on CUDA
    model_name_or_path=model_name,
    device="cuda",
    compute_type="int8_float16",
    tokenizer=AutoTokenizer.from_pretrained("OpenAssistant/pythia-12b-sft-v8-7k-steps"),
)
outputs = model.generate(
    text=["How do you call a fast Flan-ingo?", "User: How are you doing? Bot:"],
)
print(outputs)
```
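
For chat-style use, the model expects the OpenAssistant prompt tokens shown in the widget examples above. A minimal sketch, reusing the `model` object from the previous snippet; the question text is only illustrative:

```python
# OpenAssistant dialogue format: <|prompter|>QUESTION<|endoftext|><|assistant|>
question = "What is a meme, and what's the history behind this word?"
prompt = f"<|prompter|>{question}<|endoftext|><|assistant|>"

# Generate a reply with the GeneratorCT2fromHfHub instance created above.
outputs = model.generate(text=[prompt])
print(outputs[0])
```

On a CPU-only machine, construct the wrapper with `device="cpu"` and `compute_type="int8"` instead, per the compatibility notes above.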

# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repo.

# Original description

- base model: [OpenAssistant/pythia-12b-pre-v8-12.5k-steps](https://huggingface.co/OpenAssistant/pythia-12b-pre-v8-12.5k-steps)
- wandb: https://wandb.ai/open-assistant/supervised-finetuning/runs/pcw1ejda
- [sampling report](https://raw.githubusercontent.com/Open-Assistant/oasst-model-eval/main/sampling_reports/oasst-sft/2023-05-07_OpenAssistant_pythia-12b-sft-v8-7k-steps_sampling_noprefix2.json)

```yaml
pythia-12b-sft-8:
  dtype: fp16
  log_dir: "pythia_log_12b"
  learning_rate: 6e-6
  model_name: OpenAssistant/pythia-12b-pre-v8-12.5k-steps
  output_dir: pythia_model_12b
  weight_decay: 0.0
  residual_dropout: 0.0
  max_length: 2048
  use_flash_attention: true
  warmup_steps: 100
  gradient_checkpointing: true
  gradient_accumulation_steps: 2
  per_device_train_batch_size: 4
  per_device_eval_batch_size: 4
  eval_steps: 251
  save_steps: 500
  num_train_epochs: 8
  save_total_limit: 3
  use_custom_sampler: true
  sort_by_length: false
  save_strategy: steps
  datasets:
    - oasst_export:
        lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
        input_file_path: 2023-05-06_OASST_labels.jsonl.gz
        val_split: 0.05
    - vicuna:
        val_split: 0.05
        max_val_set: 800
        fraction: 0.4
    - dolly15k:
        val_split: 0.05
        max_val_set: 300
    - grade_school_math_instructions:
        val_split: 0.05
    - code_alpaca:
        val_split: 0.05
        max_val_set: 250
    - red_pajama:
        fraction: 0.05
        max_val_set: 1000
    - wizardlm_70k:
        val_split: 0.05
        max_val_set: 500
        fraction: 0.4
    - poem_instructions:
        fraction: 0.5
        val_split: 0.025
```
config.json
ADDED
@@ -0,0 +1,5 @@
{
    "bos_token": "<|endoftext|>",
    "eos_token": "<|endoftext|>",
    "unk_token": "<|endoftext|>"
}
generation_config.json
ADDED
@@ -0,0 +1,6 @@
{
    "_from_model_config": true,
    "bos_token_id": 0,
    "eos_token_id": 0,
    "transformers_version": "4.28.0.dev0"
}
model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:78d2692d1abb075f34f6fc22b99785dd8429a3fbda460b81a8643537975ac029
size 23683674702
special_tokens_map.json
ADDED
@@ -0,0 +1,14 @@
{
    "additional_special_tokens": [
        "<|prefix_begin|>",
        "<|prompter|>",
        "<|assistant|>",
        "<|system|>",
        "<|prefix_end|>"
    ],
    "bos_token": "<|endoftext|>",
    "eos_token": "<|endoftext|>",
    "pad_token": "<|padding|>",
    "sep_token": "<|endoftext|>",
    "unk_token": "<|endoftext|>"
}
tokenizer.json
ADDED
The diff for this file is too large to render.
tokenizer_config.json
ADDED
@@ -0,0 +1,10 @@
{
    "add_prefix_space": false,
    "bos_token": "<|endoftext|>",
    "clean_up_tokenization_spaces": true,
    "eos_token": "<|endoftext|>",
    "model_max_length": 1000000000000000019884624838656,
    "special_tokens_map_file": "/admin/home-hailey/.cache/huggingface/hub/models--EleutherAI--gpt-neox-20b/snapshots/4e49eadb5d14bd22f314ec3f45b69a87b88c7691/special_tokens_map.json",
    "tokenizer_class": "GPTNeoXTokenizer",
    "unk_token": "<|endoftext|>"
}
vocabulary.txt
ADDED
The diff for this file is too large to render.