---
license: other
language:
- zh
tags:
- Text Generation
- LLM
---
<div align="center">
<h1>
 DevOps-Model-7B-Chat
</h1>
</div>

<p align="center">
🤗 <a href="https://huggingface.co/codefuse-ai" target="_blank">Hugging Face</a> • 
🤖 <a href="https://modelscope.cn/organization/codefuse-ai" target="_blank">ModelScope</a> 
</p>

DevOps-Model is a Chinese **large language model for DevOps**, dedicated to delivering practical value in the DevOps field. Currently, DevOps-Model can help engineers answer questions that arise throughout the entire DevOps life cycle.

Based on the Qwen series of models, we continue training on a high-quality Chinese DevOps corpus to produce the **Base** model, and then align it with DevOps QA data to produce the **Chat** model. On DevOps-related evaluation data, both the Base and Chat models achieve the best results among models of the same scale.

<br>
At the same time, we are also building [DevOpsEval](https://github.com/codefuse-ai/codefuse-devops-eval), an evaluation benchmark dedicated to the DevOps field, to better evaluate models in this domain.
<br>
<br>

# Evaluation
We first selected six DevOps-related exam subjects from the CMMLU and CEval evaluation datasets, totaling 574 multiple-choice questions. The details are as follows:

| Evaluation dataset | Exam subjects | Number of questions |
|:-------:|:-------:|:-------:|
|   CMMLU  | Computer science | 204 |
|   CMMLU  | Computer security | 171 |
|   CMMLU  | Machine learning | 122 |
| CEval   | College programming | 37 |
| CEval   | Computer architecture | 21 |
| CEval   | Computer network | 19 |


We evaluated both the zero-shot and five-shot settings. Our 7B and 14B series models achieve the best results among the models tested; more results will be released later. A minimal sketch of how such a multiple-choice evaluation can be run is shown after the results table below.

|Model|Size|Zero-shot Score|Five-shot Score|
|--|--|--|--|
|**DevOps-Model-7B-Chat**|**7B**|**62.20**|**64.11**|
|Qwen-7B-Chat|7B|46.00|52.44|
|Baichuan2-7B-Chat|7B|52.26|54.46|
|Internlm-7B-Chat|7B|52.61|55.75|


<br>

# Quickstart
We provide simple examples below to illustrate how to quickly use the DevOps-Model-Chat models with 🤗 Transformers.

## Requirement
```bash
cd path_to_download_model
pip install -r requirements.txt
```

## Model Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig

# Load the tokenizer and model; trust_remote_code is required for the custom chat interface.
tokenizer = AutoTokenizer.from_pretrained("path_to_DevOps-Model", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("path_to_DevOps-Model", device_map="auto", trust_remote_code=True, bf16=True).eval()

# Load the generation config shipped with the model.
model.generation_config = GenerationConfig.from_pretrained("path_to_DevOps-Model", trust_remote_code=True)

# Ask a question; `hist` carries the conversation history for follow-up turns.
resp, hist = model.chat(query='What is the difference between HashMap and Hashtable in Java', tokenizer=tokenizer, history=None)
print(resp)
```
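
The `hist` value returned by `model.chat` can be passed back in for a follow-up turn. A minimal sketch, assuming the interface above accepts the returned history; the follow-up question itself is illustrative:

```python
# Follow-up turn: reuse the history returned by the first call so the model
# sees the earlier exchange. The question below is purely illustrative.
resp, hist = model.chat(
    query='Which of the two should I prefer in multi-threaded code, and why?',
    tokenizer=tokenizer,
    history=hist,
)
print(resp)
```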



# Disclaimer
Due to the characteristics of language models, the content generated by the model may contain hallucinations or discriminatory remarks. Please use the content generated by the DevOps-Model family of models with caution.
If you want to use this model publicly or commercially, please note that the service provider is responsible for any adverse effects or harmful statements it produces. The developers of this project do not assume any responsibility for harm or loss caused by the use of this project (including but not limited to its data, models, and code).



# Acknowledgments
This project builds on the following open-source projects, and we would like to thank the related projects and their developers:
- [LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Efficient-Tuning)
- [QwenLM](https://github.com/QwenLM)