---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507/blob/main/LICENSE
base_model:
- Qwen/Qwen3-4B-Thinking-2507
pipeline_tag: text-generation
tags:
- abliterated
- uncensored
---
# huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated

This is an uncensored version of [Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it).
The technique is a crude, proof-of-concept way to remove refusals from an LLM without using TransformerLens.

Ablation was performed with a newer, faster method that yields better results.

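For intuition, the core idea behind abliteration is to estimate a "refusal direction" from the difference between the model's hidden-state activations on refusal-triggering and harmless prompts, and then project that direction out of the weights. The sketch below is illustrative only: the prompt lists, probed layer, and projected modules are hypothetical and not the exact procedure used for this release.

```python
# Illustrative sketch of the abliteration idea only; prompt lists, the probed
# layer, and the projected modules are hypothetical, not this repo's recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL_ID = "Qwen/Qwen3-4B-Thinking-2507"
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL_ID, torch_dtype=torch.bfloat16)

def mean_last_token_state(prompts, layer=-1):
    # Average hidden state of the final prompt token at the chosen layer.
    states = []
    for prompt in prompts:
        ids = tokenizer.apply_chat_template(
            [{"role": "user", "content": prompt}],
            add_generation_prompt=True, return_tensors="pt",
        )
        with torch.no_grad():
            out = model(ids, output_hidden_states=True)
        states.append(out.hidden_states[layer][0, -1, :])
    return torch.stack(states).mean(dim=0)

harmful_prompts = ["..."]   # prompts that normally trigger refusals (placeholder)
harmless_prompts = ["..."]  # neutral prompts of similar form (placeholder)

# Unit vector pointing from "harmless" activations toward "refusal" activations.
refusal_dir = mean_last_token_state(harmful_prompts) - mean_last_token_state(harmless_prompts)
refusal_dir = (refusal_dir / refusal_dir.norm()).to(torch.bfloat16)

# Orthogonalize each attention output projection against the refusal direction,
# so writes into the residual stream can no longer express it.
for block in model.model.layers:
    W = block.self_attn.o_proj.weight.data          # [hidden_size, num_heads * head_dim]
    W -= torch.outer(refusal_dir, refusal_dir) @ W  # W <- (I - r r^T) W

model.save_pretrained("abliterated-sketch")
```
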
## ollama

You can use [huihui_ai/qwen3-abliterated:4b-thinking-2507-q4_K_M](https://ollama.com/huihui_ai/qwen3-abliterated:4b-thinking-2507-q4_K_M) directly:
```
ollama run huihui_ai/qwen3-abliterated:4b-thinking-2507-q4_K_M
```
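
If you prefer to call the model programmatically rather than through the interactive CLI, a minimal sketch against Ollama's local REST API looks like the following (it assumes the Ollama server is running on its default port, 11434, and that the `requests` package is installed):

```python
# Minimal sketch: query the pulled model through Ollama's local REST API.
# Assumes the Ollama server is running at its default address (localhost:11434).
import requests

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "huihui_ai/qwen3-abliterated:4b-thinking-2507-q4_K_M",
        "messages": [{"role": "user", "content": "Give me a short introduction to large language models."}],
        "stream": False,
    },
    timeout=600,
)
print(response.json()["message"]["content"])
```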

## Usage

You can use this model in your applications by loading it with Hugging Face's `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer
import torch
import os
import signal
import time

cpu_count = os.cpu_count()
print(f"Number of CPU cores in the system: {cpu_count}")
half_cpu_count = max(1, cpu_count // 2)
os.environ["MKL_NUM_THREADS"] = str(half_cpu_count)
os.environ["OMP_NUM_THREADS"] = str(half_cpu_count)
torch.set_num_threads(half_cpu_count)

print(f"PyTorch threads: {torch.get_num_threads()}")
print(f"MKL threads: {os.getenv('MKL_NUM_THREADS')}")
print(f"OMP threads: {os.getenv('OMP_NUM_THREADS')}")

# Load the model and tokenizer
NEW_MODEL_ID = "huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated"
print(f"Load Model {NEW_MODEL_ID} ... ")
quant_config_4 = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    llm_int8_enable_fp32_cpu_offload=True,
)

model = AutoModelForCausalLM.from_pretrained(
    NEW_MODEL_ID,
    device_map="balanced",
    trust_remote_code=True,
    quantization_config=quant_config_4,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
)
#print(model)
#print(model.config)

tokenizer = AutoTokenizer.from_pretrained(NEW_MODEL_ID, trust_remote_code=True)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
    tokenizer.pad_token_id = tokenizer.eos_token_id

messages = []
skip_prompt = True
skip_special_tokens = True
do_sample = True

class CustomTextStreamer(TextStreamer):
    def __init__(self, tokenizer, skip_prompt=True, skip_special_tokens=True):
        super().__init__(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)
        self.generated_text = ""
        self.stop_flag = False
        self.init_time = time.time()  # Record initialization time
        self.end_time = None  # To store end time
        self.first_token_time = None  # To store first token generation time
        self.token_count = 0  # To track total tokens

    def on_finalized_text(self, text: str, stream_end: bool = False):
        if self.first_token_time is None and text.strip():  # Set first token time on first non-empty text
            self.first_token_time = time.time()
        self.generated_text += text
        # Count tokens in the generated text
        tokens = self.tokenizer.encode(text, add_special_tokens=False)
        self.token_count += len(tokens)
        print(text, end="", flush=True)
        if stream_end:
            self.end_time = time.time()  # Record end time when streaming ends
        if self.stop_flag:
            raise StopIteration

    def stop_generation(self):
        self.stop_flag = True
        self.end_time = time.time()  # Record end time when generation is stopped

    def get_metrics(self):
        """Returns initialization time, first token time, first token latency, end time, total time, total tokens, and tokens per second."""
        if self.end_time is None:
            self.end_time = time.time()  # Set end time if not already set
        total_time = self.end_time - self.init_time  # Total time from init to end
        tokens_per_second = self.token_count / total_time if total_time > 0 else 0
        first_token_latency = (self.first_token_time - self.init_time) if self.first_token_time is not None else None
        metrics = {
            "init_time": self.init_time,
            "first_token_time": self.first_token_time,
            "first_token_latency": first_token_latency,
            "end_time": self.end_time,
            "total_time": total_time,  # Total time in seconds
            "total_tokens": self.token_count,
            "tokens_per_second": tokens_per_second
        }
        return metrics

def generate_stream(model, tokenizer, messages, skip_prompt, skip_special_tokens, do_sample, max_new_tokens):
    input_ids = tokenizer.apply_chat_template(
        messages,
        tokenize=True,
        add_generation_prompt=True,
        return_tensors="pt"
    )
    attention_mask = torch.ones_like(input_ids, dtype=torch.long)
    tokens = input_ids.to(model.device)
    attention_mask = attention_mask.to(model.device)

    streamer = CustomTextStreamer(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)

    def signal_handler(sig, frame):
        streamer.stop_generation()
        print("\n[Generation stopped by user with Ctrl+C]")

    signal.signal(signal.SIGINT, signal_handler)

    if do_sample:
        generate_kwargs = {
            "do_sample": do_sample,
            "max_new_tokens": max_new_tokens,
            "temperature": 0.7,
            "top_k": 20,
            "top_p": 0.8,
            "repetition_penalty": 1.2,
            "no_repeat_ngram_size": 2
        }
    else:
        generate_kwargs = {
            "do_sample": do_sample,
            "max_new_tokens": max_new_tokens,
            "repetition_penalty": 1.2,
            "no_repeat_ngram_size": 2
        }

    print("Response: ", end="", flush=True)
    try:
        generated_ids = model.generate(
            tokens,
            attention_mask=attention_mask,
            #use_cache=False,
            pad_token_id=tokenizer.pad_token_id,
            streamer=streamer,
            **generate_kwargs
        )
        del generated_ids
    except StopIteration:
        print("\n[Stopped by user]")

    del input_ids, attention_mask
    torch.cuda.empty_cache()
    signal.signal(signal.SIGINT, signal.SIG_DFL)

    return streamer.generated_text, streamer.stop_flag, streamer.get_metrics()

while True:
    print(f"skip_prompt: {skip_prompt}")
    print(f"skip_special_tokens: {skip_special_tokens}")
    print(f"do_sample: {do_sample}")

    user_input = input("User: ").strip()
    if user_input.lower() == "/exit":
        print("Exiting chat.")
        break
    if user_input.lower() == "/clear":
        messages = []
        print("Chat history cleared. Starting a new conversation.")
        continue
    if user_input.lower() == "/skip_prompt":
        skip_prompt = not skip_prompt
        continue
    if user_input.lower() == "/skip_special_tokens":
        skip_special_tokens = not skip_special_tokens
        continue
    if user_input.lower() == "/do_sample":
        do_sample = not do_sample
        continue
    if not user_input:
        print("Input cannot be empty. Please enter something.")
        continue

    messages.append({"role": "user", "content": user_input})
    response, stop_flag, metrics = generate_stream(model, tokenizer, messages, skip_prompt, skip_special_tokens, do_sample, 40960)
    print("\n\nMetrics:")
    for key, value in metrics.items():
        print(f"  {key}: {value}")

    print("", flush=True)
    if stop_flag:
        continue
    messages.append({"role": "assistant", "content": response})
```

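For non-interactive use, a much shorter pattern is enough. The sketch below reuses the `model` and `tokenizer` loaded above; the prompt and sampling values are illustrative. Because the thinking variant emits its reasoning before a closing `</think>` tag, the final answer can be recovered by splitting on that tag.

```python
# Minimal non-interactive sketch (reuses `model` and `tokenizer` from above;
# sampling parameters are illustrative). The chat template opens the thinking
# block, so the output contains reasoning followed by "</think>" and the answer.
prompt = "Give me a short introduction to large language models."
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=4096,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
)
text = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Separate the reasoning from the final answer when the closing tag is present.
thinking, _, answer = text.partition("</think>")
print(answer.strip() or text.strip())
```
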
### Usage Warnings

- **Risk of Sensitive or Controversial Outputs**: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.

- **Not Suitable for All Audiences**: Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security.

- **Legal and Ethical Responsibilities**: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.

- **Research and Experimental Use**: This model is recommended for research, testing, or other controlled environments; avoid direct use in production or public-facing commercial applications.

- **Monitoring and Review Recommendations**: Users are strongly advised to monitor model outputs in real time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.

- **No Default Safety Guarantees**: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.

### Donation

If you like it, please click 'like' and follow us for more updates.
You can follow [x.com/support_huihui](https://x.com/support_huihui) to get the latest model information from huihui.ai.

##### Your donation helps us continue development and improvement; even a cup of coffee makes a difference.
- Bitcoin (BTC):
```
bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
```
- Support our work on [Ko-fi](https://ko-fi.com/huihuiai)!