---
inference: false
language:
- zh
- en
license: unknown
model_name: Chihiro-7B-v0.1
pipeline_tag: text-generation
prompt_template: '<s> SYS_PROMPT [INST] QUERY1 [/INST] RESPONSE1 [INST] QUERY2 [/INST]'
quantized_by: yuuko-eth
tags:
- nlp
- chinese
- mistral
- traditional_chinese
- merge
- mergekit
- MediaTek-Research/Breeze-7B-Instruct-v0_1
- mlabonne/Zebrafish-7B
---


# Chihiro-7B-v0.1-GGUF
- Model creator: [yuuko-eth](https://huggingface.co/yuuko-eth)
- Original model: [Chihiro-7B-v0.1](https://huggingface.co/yuuko-eth/Chihiro-7B-v0.1)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Chihiro-7B-v0.1](https://huggingface.co/yuuko-eth/Chihiro-7B-v0.1).
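
As a minimal sketch of fetching one of the quant files programmatically with `huggingface_hub`: the filename below is an assumed example, not necessarily a file present in this repo, so check the repo's file listing for the actual names.

```python
# Minimal sketch: download a single GGUF quant from this repo via huggingface_hub.
# "chihiro-7b-v0.1.Q4_K_M.gguf" is an assumed example filename; substitute a name
# from the repo's actual file listing.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="yuuko-eth/Chihiro-7B-v0.1-GGUF",
    filename="chihiro-7b-v0.1.Q4_K_M.gguf",
)
print(model_path)  # local cache path of the downloaded .gguf file
```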

<!-- description end -->

### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server (a short loading sketch follows this list).
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
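
As a quick way to try the files locally, here is a minimal llama-cpp-python loading sketch; the quant filename and the `n_ctx` / `n_gpu_layers` values are illustrative assumptions, not values taken from this card.

```python
# Minimal llama-cpp-python sketch: load a local GGUF quant and run one completion.
# Filename and n_ctx / n_gpu_layers are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="chihiro-7b-v0.1.Q4_K_M.gguf",  # hypothetical local quant file
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers when a GPU build is installed
)

out = llm(
    "[INST] 你好，請用繁體中文自我介紹。 [/INST]",  # Mistral-style instruction prompt
    max_tokens=256,
    stop=["[INST]"],
)
print(out["choices"][0]["text"])
```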


---

> The original `README.md` follows.

---


<br/>

# 千尋 (Chihiro) 7B v0.1

An experimental general-purpose Traditional Chinese foundation model, built as a SLERP merge of Zebrafish 7B and Breeze 7B 📚

GGUF Quants 👉 [Chihiro-7B-v0.1-GGUF](https://huggingface.co/yuuko-eth/Chihiro-7B-v0.1-GGUF)

Please use the prompt format recommended by Mistral 7B Instruct or Breeze 7B Instruct. The model configuration is shown below.

![](https://i.imgur.com/UwNO4fS.png)

### Chihiro 7B v0.1

This is an experimental Mistral-architecture SLERP merge of two brilliant base models, Breeze and Zebrafish.

Model configuration is as follows:

* [Breeze-7B-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1) as base.
* [Zebrafish-7B](https://huggingface.co/mlabonne/Zebrafish-7B) as model 1.

To use the model, follow either of the prompt templates suggested by the base models, or simply apply the Mistral one.
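
A minimal Python sketch of assembling such a prompt string, following the `prompt_template` declared in this card's metadata; the system prompt and queries below are placeholders.

```python
# Minimal sketch: build a multi-turn prompt in the [INST] style given by this card's
# prompt_template: "<s> SYS_PROMPT [INST] QUERY1 [/INST] RESPONSE1 [INST] QUERY2 [/INST]".
def build_prompt(sys_prompt: str, turns: list[tuple[str, str]], next_query: str) -> str:
    prompt = f"<s> {sys_prompt}"
    for query, response in turns:              # completed (query, response) pairs
        prompt += f" [INST] {query} [/INST] {response}"
    prompt += f" [INST] {next_query} [/INST]"  # the query to be answered next
    return prompt

# Placeholder usage; replace with a real system prompt and conversation history.
print(build_prompt("SYS_PROMPT", [("QUERY1", "RESPONSE1")], "QUERY2"))
```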

<br/><br/>

### Benchmarks

Evaluation suite: OpenLLM
|                               Model                               | ARC |HellaSwag|           MMLU           |TruthfulQA|Winogrande|GSM8K|
|-------------------------------------------------------------------|----:|--------:|--------------------------|---------:|---------:|----:|
|[Chihiro-7B-v0.1](https://huggingface.co/yuuko-eth/Chihiro-7B-v0.1)|68.52|    85.95| (not yet evaluated) |     63.81|     81.77|64.22|


Evaluation suite: Nous
|                               Model                               |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|-------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[Chihiro-7B-v0.1](https://huggingface.co/yuuko-eth/Chihiro-7B-v0.1)|  45.16|  75.26|     63.82|   47.38|  57.91|


Average score (Nous suite): 57.91%

Evaluated on Apr. 27, 2024, on an NVIDIA RTX 4090.

<br/><br/>