---
license: mit
datasets:
- m-a-p/COIG-CQIA
language:
- zh
- en
metrics:
- accuracy
pipeline_tag: text2text-generation
tags:
- finance
- legal
- medical
- code
- biology
---

# Model Summary

Llama3-8B-COIG-CQIA is an instruction-tuned language model for Chinese and English users, built upon Meta-Llama-3-8B-Instruct, with capabilities such as roleplaying and tool use.

- Developed by: [Wenfeng Qiu](https://github.com/summit4you)

- License: [Llama-3 License](https://llama.meta.com/llama3/license/)
- Base Model: Meta-Llama-3-8B-Instruct
- Model Size: 8.03B
- Context length: 8K

# 1. Introduction

Training framework: [unsloth](https://github.com/unslothai/unsloth).

Training details (see the illustrative sketch after this list):
- epochs: 1
- learning rate: 2e-4
- learning rate scheduler type: linear
- warmup steps: 5
- cutoff len (i.e. context length): 2048
- global batch size: 2
- fine-tuning type: full parameters
- optimizer: adamw_8bit
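
To make these settings concrete, here is a minimal sketch of how they could map onto an unsloth + TRL SFT run. The base-model path, dataset subset, and prompt template are illustrative assumptions, not the author's actual training script.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model at the listed cutoff length (2048).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed base-model path
    max_seq_length=2048,  # cutoff len from the list above
)

# COIG-CQIA ships multiple subsets; the subset and prompt template below
# are placeholders, not the card's actual preprocessing.
dataset = load_dataset("m-a-p/COIG-CQIA", "ruozhiba", split="train")

def to_text(example):
    # Illustrative instruction/response template; the real format is not documented here.
    return {"text": f"{example['instruction']}\n{example['output']}"}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        num_train_epochs=1,
        learning_rate=2e-4,
        lr_scheduler_type="linear",
        warmup_steps=5,
        per_device_train_batch_size=2,  # global batch size 2 on a single GPU
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)
trainer.train()
```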

# 2. Usage

For inference, use `llama.cpp` or a UI-based application such as `GPT4All`. You can install GPT4All from [here](https://gpt4all.io/index.html).

Here is an example using `llama.cpp` via the `llama-cpp-python` bindings.

```python
from llama_cpp import Llama

# Load the GGUF model; n_gpu_layers=-1 offloads all layers to the GPU.
model = Llama(
    "/Your/Path/To/Llama3-8B-COIG-CQIA.Q8_0.gguf",
    verbose=False,
    n_gpu_layers=-1,
)

system_prompt = "You are a helpful assistant."

def generate_response(_model, _messages, _max_tokens=8192):
    # Stop on the Llama-3 end-of-turn / end-of-text tokens.
    _output = _model.create_chat_completion(
        _messages,
        stop=["<|eot_id|>", "<|end_of_text|>"],
        max_tokens=_max_tokens,
    )["choices"][0]["message"]["content"]
    return _output

# Example conversation
messages = [
    {
        "role": "system",
        "content": system_prompt,
    },
    {"role": "user", "content": "你是谁?"},  # "Who are you?"
]

print(generate_response(_model=model, _messages=messages), end="\n\n\n")
```
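
`llama-cpp-python` also supports streaming, which is handy for long replies; here is a minimal variant of the call above that prints tokens as they arrive:

```python
# Streaming variant: iterate over chunks instead of waiting for the full reply.
for chunk in model.create_chat_completion(
    messages,
    stop=["<|eot_id|>", "<|end_of_text|>"],
    max_tokens=8192,
    stream=True,
):
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)
print()
```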