Upload folder using huggingface_hub
- .gitattributes +4 -0
- README.md +101 -0
- shizhi-twilight-7b.Q4_K_M.gguf +3 -0
- shizhi-twilight-7b.Q5_K_M.gguf +3 -0
- shizhi-twilight-7b.Q6_K.gguf +3 -0
- shizhi-twilight-7b.Q8_0.gguf +3 -0
- shizhi-twilight-7b.fp16.bin +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+shizhi-twilight-7b.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+shizhi-twilight-7b.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+shizhi-twilight-7b.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+shizhi-twilight-7b.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,101 @@
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- en
- zh
---

Thanks to @s3nh for the great quantization notebook code.

## Original model card

Buy @s3nh a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>

#### Description

GGUF format model files for [this project](https://huggingface.co/{MODEL_ID}).

### GGUF Specs

GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

* Single-file deployment: models can be easily distributed and loaded, and do not require any external files for additional information.
* Extensible: new features can be added to GGML-based executors, and new information can be added to GGUF models, without breaking compatibility with existing models.
* mmap compatibility: models can be loaded using mmap for fast loading and saving.
* Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
* Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows new metadata to be added without breaking compatibility with existing models, and annotates the model with additional information that may be useful for inference or for identifying the model.
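The "full information" and "easy to use" points above come from GGUF being a self-describing binary file. As an illustration (not part of this repository), a minimal sketch that parses the fixed-size GGUF header, following the field layout in the public GGUF specification (4-byte magic, `uint32` version, then 64-bit tensor and metadata key-value counts in version 2 and later), run here against a synthetic header rather than one of the model files:

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header (little-endian, spec version 2+)."""
    if data[:4] != b"GGUF":
        raise ValueError("not a GGUF file")
    # uint32 version, uint64 tensor_count, uint64 metadata_kv_count
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", data, 4)
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Synthetic 24-byte header for illustration: version 3, 291 tensors, 24 metadata keys
header = b"GGUF" + struct.pack("<IQQ", 3, 291, 24)
print(read_gguf_header(header))  # {'version': 3, 'tensors': 291, 'metadata_kv': 24}
```

A real reader would continue past the header to decode the typed metadata key-value pairs and tensor info blocks described in the spec; libraries such as llama.cpp do this for you.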
# Original model card

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6409720c9e9f790c905ba4bf/v6B0CkdpR74oCetV3w0y-.png)

# 試製-暮光-7B

試製-暮光-7B was generated by merging the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [MediaTek-Research/Breeze-7B-Instruct-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1)
* [argilla/CapybaraHermes-2.5-Mistral-7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B)

This is an experimental model intended to test whether high-quality fine-tuning applied in one language can be transferred to another (here, English to Chinese).

# shizhi-twilight-7B

shizhi-twilight-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [MediaTek-Research/Breeze-7B-Instruct-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1)
* [argilla/CapybaraHermes-2.5-Mistral-7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B)

This is an experiment to check whether high-quality fine-tuning in one language (English) can be transferred to another (Mandarin) by leveraging the SLERP merge method.

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: MediaTek-Research/Breeze-7B-Instruct-v0_1
        layer_range: [0, 32]
      - model: argilla/CapybaraHermes-2.5-Mistral-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: MediaTek-Research/Breeze-7B-Instruct-v0_1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
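The `t` values schedule how much each layer group leans toward the second model (0 keeps the base model, 1 takes the other). SLERP itself interpolates along the arc between two weight tensors rather than along the straight line between them. A minimal sketch of the formula on toy vectors, for intuition only (mergekit's actual implementation handles normalization and edge cases more carefully):

```python
import math
import numpy as np

def slerp(t: float, p: np.ndarray, q: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    p_n = p / (np.linalg.norm(p) + eps)
    q_n = q / (np.linalg.norm(q) + eps)
    theta = math.acos(float(np.clip(np.dot(p_n, q_n), -1.0, 1.0)))
    if theta < eps:
        # Near-parallel tensors: fall back to plain linear interpolation
        return (1.0 - t) * p + t * q
    s = math.sin(theta)
    return (math.sin((1.0 - t) * theta) / s) * p + (math.sin(t * theta) / s) * q

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(slerp(0.5, a, b))  # midpoint on the arc between two orthogonal unit vectors
```

Compared with a plain weighted average, this keeps the interpolated weights at a comparable magnitude, which is one motivation for using SLERP in model merging.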
## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "lipcut/shizhi-twilight-7B"
# "What is a large language model?"
messages = [{"role": "user", "content": "什麼是大型語言模型?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
shizhi-twilight-7b.Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9460494ddd1fee53bfffa092e4c81dd4321cc87941e7adbde839014bd62320a0
+size 7952364672
shizhi-twilight-7b.Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef0acc960f48145460243c4c2b9d9e75a541764f90675c76ac7a021294d549d9
+size 9317872768
shizhi-twilight-7b.Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fb5252409b7aa61bc223a179c4de29ef7cf4a9b89020f5f4765d4421327d2807
+size 10768725120
shizhi-twilight-7b.Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:30ada631163959ea26d9d63aee5405e5a90f1eded53a7a73c799c10cb13d831b
+size 13947188352
shizhi-twilight-7b.fp16.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9bd50f89cf87ca1a58ee0f4b636377206c3dca2c87f219654f51de2e9bc0496
+size 26250916960
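The LFS pointers above record the exact byte size of each quantization, which is the main input when choosing a file for your hardware. A small helper (hypothetical, not part of this repository) that picks the largest quantization whose file fits a given memory budget, using the sizes from this commit:

```python
from typing import Optional

# Byte sizes taken from the LFS pointers in this commit
QUANT_SIZES = {
    "Q4_K_M": 7_952_364_672,
    "Q5_K_M": 9_317_872_768,
    "Q6_K": 10_768_725_120,
    "Q8_0": 13_947_188_352,
    "fp16": 26_250_916_960,
}

def best_quant(budget_bytes: int) -> Optional[str]:
    """Return the largest quantization whose file fits the budget, or None."""
    fitting = [(size, name) for name, size in QUANT_SIZES.items() if size <= budget_bytes]
    return max(fitting)[1] if fitting else None

print(best_quant(16 * 1024**3))  # Q8_0 fits a 16 GiB budget
```

Note that file size is only a lower bound: at inference time the runtime also needs memory for the KV cache and activations, so leave headroom beyond the raw file size.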