---
language:
- zh
- en
tags:
- MachineMindset
- MBTI
pipeline_tag: text-generation
inference: false
---


<p align="center">
    <img src="https://raw.githubusercontent.com/PKU-YuanGroup/Machine-Mindset/main/images/logo.png" width="650" style="margin-bottom: 0.2;"/>
</p>
<h2 align="center"> <a href="https://arxiv.org/abs/2312.12999">Machine Mindset: An MBTI Exploration of Large Language Models</a></h2>
<h5 align="center"> If you like our project, please give us a star ⭐ </h5>
<h4 align="center"> [ English | <a href="https://huggingface.co/FarReelAILab/Machine_Mindset_zh_INTP">中文</a> | <a href="https://github.com/PKU-YuanGroup/Machine-Mindset/blob/main/README_ja.md">日本語</a> ]</h4>

<br>

### Introduction

**MM_en_ENFJ (Machine_Mindset_en_ENFJ)** is an English large language model developed through a collaboration between FarReel AI Lab and Peking University Deep Research Institute. It is based on Llama2-7b-chat-hf and has the MBTI personality type ENFJ.

MM_en_ENFJ was produced through an extensive training pipeline, including the construction of a large-scale MBTI dataset, multi-stage fine-tuning, and DPO training. We are committed to continuously updating the model to improve its performance and to regularly supplementing it with test data. This repository stores the MM_en_ENFJ model weights.

The foundational personality trait of **MM_en_ENFJ (Machine_Mindset_en_ENFJ)** is **ENFJ**. Detailed characteristics of this type can be found at [16personalities](https://www.16personalities.com/).

If you would like to learn more about the Machine_Mindset open-source models, we recommend visiting the [GitHub repository](https://github.com/PKU-YuanGroup/Machine-Mindset/) for additional details.<br>

### Requirements

* Python 3.8 or later
* PyTorch 1.12 or later (2.0 or later recommended)
* CUDA 11.4 or later recommended (for GPU and flash-attention users); a quick environment check is sketched below

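The following is a minimal, optional sketch (not part of the project's own instructions) that prints the Python, PyTorch, and CUDA versions named above:

```python
# Optional sanity check for the requirements listed above (illustrative only).
import sys
import torch

print("Python:", sys.version.split()[0])             # expect 3.8 or later
print("PyTorch:", torch.__version__)                  # expect 1.12 or later, 2.0+ recommended
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA (build):", torch.version.cuda)        # 11.4 or later recommended
```
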
### Quickstart

* Using the Hugging Face Transformers library (single-turn dialogue):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

# Load the tokenizer, model weights, and the repo's default generation settings.
tokenizer = AutoTokenizer.from_pretrained("FarReelAILab/Machine_Mindset_en_ENFJ", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("FarReelAILab/Machine_Mindset_en_ENFJ", device_map="auto", torch_dtype=torch.float16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("FarReelAILab/Machine_Mindset_en_ENFJ")

# Ask a single question.
messages = []
messages.append({"role": "user", "content": "What is your MBTI personality type?"})
response = model.chat(tokenizer, messages)
print(response)

# Continue the conversation by appending the assistant's reply and a follow-up question.
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "After spending a day with a group of people, how do you feel when you return home?"})
response = model.chat(tokenizer, messages)
print(response)
```
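
The `model.chat` helper above comes from the custom code loaded via `trust_remote_code=True`. If your local snapshot does not expose it, the sketch below is a hedged fallback using the standard `generate` API; it assumes the tokenizer ships a chat template, and the sampling settings are illustrative rather than the repository's defaults:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FarReelAILab/Machine_Mindset_en_ENFJ"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16, trust_remote_code=True)

messages = [{"role": "user", "content": "What is your MBTI personality type?"}]
# Build the prompt from the chat history (requires a chat template in the tokenizer config).
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# Illustrative sampling settings, not the repository's defaults.
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
reply = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```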

* Using the Hugging Face Transformers library (multi-turn dialogue):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

# Load the tokenizer, model weights, and the repo's default generation settings.
tokenizer = AutoTokenizer.from_pretrained("FarReelAILab/Machine_Mindset_en_ENFJ", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("FarReelAILab/Machine_Mindset_en_ENFJ", device_map="auto", torch_dtype=torch.float16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("FarReelAILab/Machine_Mindset_en_ENFJ")

messages = []
print("####Enter 'exit' to exit.")
print("####Enter 'clear' to clear the chat history.")
while True:
    user = str(input("User:"))
    if user.strip() == "exit":
        break
    elif user.strip() == "clear":
        messages = []
        continue
    # Append the user turn, generate a reply, and keep it in the history.
    messages.append({"role": "user", "content": user})
    response = model.chat(tokenizer, messages)
    print("Assistant:", response)
    messages.append({"role": "assistant", "content": str(response)})
```

* Using LLaMA-Factory (multi-turn dialogue):
```bash
git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
python ./src/cli_demo.py \
    --model_name_or_path /path_to_your_local_model \
    --template llama2
```
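
If you do not yet have a local copy of the weights to pass as `--model_name_or_path`, one way to download them is sketched below (assuming the `huggingface_hub` package is installed; pass the returned path to LLaMA-Factory):

```python
# Download the repository into the local Hugging Face cache and print its path (illustrative).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="FarReelAILab/Machine_Mindset_en_ENFJ")
print("Local model path:", local_dir)
```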

For more information, please refer to our [GitHub repo](https://github.com/PKU-YuanGroup/Machine-Mindset/).
<br>

### Citation

If you find our work helpful, please consider citing it:

```bibtex
@article{cui2023machine,
  title={Machine Mindset: An MBTI Exploration of Large Language Models},
  author={Cui, Jiaxi and Lv, Liuzhenghao and Wen, Jing and Tang, Jing and Tian, YongHong and Yuan, Li},
  journal={arXiv preprint arXiv:2312.12999},
  year={2023}
}
```

### License Agreement

Our code is released under the Apache 2.0 open-source license. Please check [LICENSE](https://github.com/PKU-YuanGroup/Machine-Mindset/blob/main/LICENSE) for specific details.

The model weights we provide are derived from the original base-model weights and therefore follow the corresponding original licenses.

The Chinese-version models follow the Baichuan open-source agreement, which permits commercial use. You can refer to [model_LICENSE](https://huggingface.co/JessyTsu1/Machine_Mindset_en_ENFJ/resolve/main/Machine_Mindset%E5%9F%BA%E4%BA%8Ebaichuan%E7%9A%84%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) for specific details.

The English-version models follow the Llama 2 open-source license. You can refer to the [Llama 2 license](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) for specific details.

### Contact Us

Feel free to send an email to jiaxicui446@gmail.com or lvliuzh@stu.pku.edu.cn.