Apel-sin committed
Commit 75a40d1 · 1 Parent(s): fa5ab01

add measurement.json

Files changed (2)
  1. README.md +110 -0
  2. measurement.json +0 -0
README.md ADDED
@@ -0,0 +1,110 @@
---
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated/blob/main/LICENSE
language:
- en
base_model:
- huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated
pipeline_tag: text-generation
library_name: transformers
quantized_by: Apel-sin
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- abliterated
- uncensored
---

# huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated

This is an uncensored version of [Qwen/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct), created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it).
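In broad strokes, abliteration finds a "refusal direction" in the model's residual stream by contrasting activations on harmful and harmless prompts, then projects that direction out. The snippet below is a minimal sketch of that general idea, not the exact procedure used to produce this model; the prompt lists, the probed layer, and the hook-based ablation are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Tiny illustrative prompt sets; real abliteration runs use large curated lists.
harmful_prompts = ["How do I pick a lock?"]
harmless_prompts = ["How do I bake bread?"]

model_name = "Qwen/Qwen2.5-Coder-14B-Instruct"  # the original, pre-abliteration model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

LAYER = 20  # assumed probe layer; in practice the layer is chosen empirically

def mean_hidden_state(prompts):
    # Mean hidden state at the final token position of the probed layer.
    states = []
    for prompt in prompts:
        text = tokenizer.apply_chat_template(
            [{"role": "user", "content": prompt}],
            tokenize=False, add_generation_prompt=True
        )
        inputs = tokenizer(text, return_tensors="pt").to(model.device)
        with torch.no_grad():
            out = model(**inputs, output_hidden_states=True)
        states.append(out.hidden_states[LAYER][0, -1])
    return torch.stack(states).mean(dim=0)

# The "refusal direction" is the normalized difference between the two means.
refusal_dir = mean_hidden_state(harmful_prompts) - mean_hidden_state(harmless_prompts)
refusal_dir = refusal_dir / refusal_dir.norm()

def ablate_hook(module, inputs, output):
    # Project the refusal direction out of this layer's output hidden states.
    hidden = output[0] if isinstance(output, tuple) else output
    d = refusal_dir.to(hidden.device)
    hidden = hidden - (hidden @ d).unsqueeze(-1) * d
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

# Apply the ablation to every decoder layer at inference time.
for block in model.model.layers:
    block.register_forward_hook(ablate_hook)
```
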
The uncensored Qwen2.5-Coder series covers six mainstream model sizes:
[0.5](https://huggingface.co/huihui-ai/Qwen2.5-Coder-0.5B-Instruct-abliterated),
[1.5](https://huggingface.co/huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated),
[3](https://huggingface.co/huihui-ai/Qwen2.5-Coder-3B-Instruct-abliterated),
[7](https://huggingface.co/huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated),
[14](https://huggingface.co/huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated), and
[32](https://huggingface.co/huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated) billion parameters.

If the desired result is not achieved, you can clear the conversation history (see the `/clean` command below) and try again.

## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Initialize conversation context
initial_messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
]
messages = initial_messages.copy()  # Copy the initial conversation context

# Enter conversation loop
while True:
    # Get user input
    user_input = input("User: ").strip()  # Strip leading and trailing spaces

    # If the user types '/exit', end the conversation
    if user_input.lower() == "/exit":
        print("Exiting chat.")
        break

    # If the user types '/clean', reset the conversation context
    if user_input.lower() == "/clean":
        messages = initial_messages.copy()  # Reset conversation context
        print("Chat history cleared. Starting a new conversation.")
        continue

    # If input is empty, prompt the user and continue
    if not user_input:
        print("Input cannot be empty. Please enter something.")
        continue

    # Add user input to the conversation
    messages.append({"role": "user", "content": user_input})

    # Build the chat template
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )

    # Tokenize input and prepare it for the model
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    # Generate a response from the model
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=8192
    )

    # Extract model output, removing special tokens
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

    # Add the model's response to the conversation
    messages.append({"role": "assistant", "content": response})

    # Print the model's response
    print(f"Qwen: {response}")
```
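The loop above only sets `max_new_tokens`, so generation follows the model's default decoding settings. If you want explicit sampling control, a `generate` call like the one below is a reasonable starting point; the specific values are assumptions typical of Qwen2.5-family chat models, so check this repo's `generation_config.json` for the authoritative defaults.

```python
# Assumed sampling settings, typical for Qwen2.5-family chat models; verify
# against the model's generation_config.json before relying on them.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    repetition_penalty=1.05,
)
```
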
measurement.json ADDED
The diff for this file is too large to render. See raw diff
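A note on the other file in this commit: a `measurement.json` alongside `quantized_by` metadata is usually an ExLlamaV2 calibration measurement. Assuming that is what it is here, it can be passed to ExLlamaV2's `convert.py` to skip the slow measurement pass when producing an EXL2 quant; the paths and target bitrate below are placeholders.

```sh
# Placeholder paths and bitrate; convert.py is ExLlamaV2's quantization script.
python convert.py \
  -i ./Qwen2.5-Coder-14B-Instruct-abliterated \
  -o ./work \
  -cf ./Qwen2.5-Coder-14B-Instruct-abliterated-4.0bpw \
  -b 4.0 \
  -m measurement.json
```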