---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# Fire Balloon's Baichuan Vicuna 7B GPTQ

These files are GPTQ 4-bit model files for [Fire Balloon's Baichuan Vicuna 7B](https://huggingface.co/fireballoon/baichuan-vicuna-7b).

It is the result of quantising to 4-bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/baichuan-vicuna-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/none)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/fireballoon/baichuan-vicuna-7b)

## How to easily download and use this model in text-generation-webui

Please make sure you're using the latest version of text-generation-webui.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/baichuan-vicuna-7B-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `baichuan-vicuna-7B-GPTQ`.
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
   * Note that you do not need to, and should not, set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!

## How to use this GPTQ model from Python code

First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

`pip install auto-gptq`

Then try the following example code:

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/baichuan-vicuna-7B-GPTQ"
model_basename = "baichuan-vicuna-7b-GPTQ-4bit-128g.no-act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=False,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

# Note: check the prompt template is correct for this model.
prompt = "Tell me about AI"
prompt_template = f'''USER: {prompt}
ASSISTANT:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```
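
For multi-turn use, the one-shot template above can be generalised into a small helper. This is a sketch: `format_prompt` is our name for illustration, not part of this repo or of AutoGPTQ.

```python
def format_prompt(turns):
    """Build a Vicuna-style prompt from a list of (user, assistant) pairs.

    Pass None as the assistant reply for the final turn to leave the
    prompt open for the model to complete.
    """
    parts = []
    for user, assistant in turns:
        parts.append(f"USER: {user}")
        # An open final turn ends with a bare "ASSISTANT:" marker
        parts.append("ASSISTANT:" if assistant is None else f"ASSISTANT: {assistant}")
    return "\n".join(parts)

# Single-turn prompt, equivalent to prompt_template above:
print(format_prompt([("Tell me about AI", None)]))
```

The returned string can be passed to the tokenizer or pipeline exactly as `prompt_template` is above.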

## Provided files

**baichuan-vicuna-7b-GPTQ-4bit-128g.no-act.order.safetensors**

This will work with AutoGPTQ, ExLlama, and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with the Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.

It was created with group_size 128 to increase inference accuracy, but without --act-order (desc_act), to increase compatibility and improve inference speed.

* `baichuan-vicuna-7b-GPTQ-4bit-128g.no-act.order.safetensors`
  * Works with AutoGPTQ in CUDA or Triton modes.
  * LLaMa models also work with [ExLlama](https://github.com/turboderp/exllama), which usually provides much higher performance, and uses less VRAM, than AutoGPTQ.
  * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
  * Works with text-generation-webui, including one-click-installers.
  * Parameters: Groupsize = 128. Act Order / desc_act = False.
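
Those parameters correspond to a `quantize_config.json` along these lines. This is a sketch built from the values stated above; the remaining fields are common AutoGPTQ defaults, not read from this repo:

```json
{
  "bits": 4,
  "group_size": 128,
  "desc_act": false,
  "damp_percent": 0.01,
  "sym": true,
  "true_sequential": true
}
```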

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: Mano Prime, Fen Risland, Derek Yates, Preetika Verma, webtim, Sean Connelly, Alps Aficionado, Karl Bernard, Junyu Yang, Nathan LeClaire, Chris McCloskey, Lone Striker, Asp the Wyvern, Eugene Pentland, Imad Khwaja, trip7s trip, WelcomeToTheClub, John Detwiler, Artur Olbinski, Khalefa Al-Ahmad, Trenton Dambrowitz, Talal Aujan, Kevin Schuppel, Luke Pendergrass, Pyrater, Joseph William Delisle, terasurfer, vamX, Gabriel Puliatti, David Flickinger, Jonathan Leane, Iucharbius, Luke, Deep Realms, Cory Kujawski, ya boyyy, Illia Dulskyi, senxiiz, Johann-Peter Hartmann, John Villwock, K, Ghost, Spiking Neurons AB, Nikolai Manek, Rainer Wilmers, Pierre Kircher, biorpg, Space Cruiser, Ai Maven, subjectnull, Willem Michiel, Ajan Kanaga, Kalila, chris gileta, Oscar Rangel.

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: Fire Balloon's Baichuan Vicuna 7B

# baichuan-vicuna-7b
baichuan-vicuna-7b is a chat model supervised-finetuned on Vicuna ShareGPT data.
- The foundation model is [baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B), a large-scale pretrained model developed by Baichuan Intelligence that allows commercial use.
- The finetuning data includes [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split.json), mixed with [COT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT) and [Leetcode](https://www.kaggle.com/datasets/erichartford/leetcode-solutions) data, which aim to improve the model's reasoning and coding ability (the data-mixing strategy is inspired by [TULU](https://arxiv.org/abs/2306.04751)).
- The training code: https://huggingface.co/fireballoon/baichuan-vicuna-7b/blob/main/train_vicuna.py, which is based on [FastChat](https://github.com/lm-sys/FastChat).

[Chinese model card](#chinese-model-card)

**[New]** [baichuan-vicuna-chinese-7b](https://huggingface.co/fireballoon/baichuan-vicuna-chinese-7b), baichuan finetuned on both English and Chinese ShareGPT.

# Load model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("fireballoon/baichuan-vicuna-7b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("fireballoon/baichuan-vicuna-7b")
```

# Inference
Inference with FastChat:
```
python3 -m fastchat.serve.cli --model-path fireballoon/baichuan-vicuna-7b
```

Inference with Transformers:
```ipython
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
>>> tokenizer = AutoTokenizer.from_pretrained("fireballoon/baichuan-vicuna-7b", use_fast=False)
>>> model = AutoModelForCausalLM.from_pretrained("fireballoon/baichuan-vicuna-7b").half().cuda()
>>> streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
>>> instruction = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {} ASSISTANT:"
>>> prompt = instruction.format("five tips to help with sleep")  # user message
>>> generate_ids = model.generate(tokenizer(prompt, return_tensors='pt').input_ids.cuda(), max_new_tokens=2048, streamer=streamer)
'''
1. Create a relaxing bedtime routine, such as winding down with a warm bath or reading a book before bed.
2. Avoid caffeine and alcohol close to bedtime, as they can disrupt sleep.
3. Use blackout curtains or shades to block out light from the outside, which can interfere with your body's natural sleep rhythms.
4. Keep your bedroom cool and dark, and avoid using electronic devices at least an hour before bed.
5. Regular exercise and a healthy diet can also help improve sleep quality.
'''
```
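
Because `generate` echoes the prompt before the reply, the assistant's text can be separated from the decoded output with a small helper. This is a sketch; `extract_reply` is our name for illustration, not part of the model card:

```python
def extract_reply(decoded: str) -> str:
    """Return only the text after the last 'ASSISTANT:' marker."""
    return decoded.rsplit("ASSISTANT:", 1)[-1].strip()

# Example on a decoded string of the shape produced above:
text = "A chat between ... USER: five tips to help with sleep ASSISTANT: 1. Create a relaxing bedtime routine."
print(extract_reply(text))
```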


# Test examples on FastChat Eval
Test examples on [FastChat Eval](https://github.com/lm-sys/FastChat/blob/main/fastchat/eval/table/question.jsonl)

Please refer to [discussions 2](https://huggingface.co/fireballoon/baichuan-vicuna-7b/discussions/2) for a comparative evaluation of Baichuan-vicuna-7b and Vicuna-13b by GPT-4.

```
Q: How can I improve my time management skills?

A: There are many different ways to improve your time management skills. Here are a few strategies that you may find helpful:
1 Prioritize your tasks: Make a list of all the tasks you need to complete and prioritize them based on importance and urgency. This will help you focus on the most important tasks and avoid getting bogged down by less critical items.
2 Use a time management planner: A planner can help you organize your time by breaking down your tasks into smaller, more manageable chunks. This can help you stay focused and on track.
3 Set clear boundaries: Setting clear boundaries between work time and personal time can help you avoid distractions and stay focused on your tasks. This may involve designating certain times of day or days of the week for work and other times for rest and leisure.
4 Take care of your physical and mental health: Taking care of your physical and mental health can help you be more productive and focused. This may involve getting enough sleep, eating a healthy diet, and taking breaks to relax and recharge.
Remember that improving your time management skills takes practice and patience. Don't be discouraged if you make mistakes or experience setbacks. Keep trying and you will eventually develop better time management habits that will help you be more productive and successful.
```
```
Q: Can you help me write a formal email to a potential business partner proposing a joint venture?

A: Sure! Here is a sample formal email that you could use as a template to propose a joint venture with a
potential business partner:

Subject: Proposal for Joint Venture

Dear [Business Partner],

I hope this email finds you well. I am writing to propose a joint venture between our companies. We believe that a collaboration between our companies could bring significant benefits to both of us, and we would like to explore the possibility of forming a partnership.

Our companies have complementary strengths, and we believe that a joint venture could allow us to leverage those strengths to create new products, services, and opportunities. We believe that a joint venture could also provide increased revenue and growth for both of our companies.

In order to explore the possibility of a joint venture, we would like to schedule a call to discuss the details of our proposal. During the call, we would like to discuss the terms of the joint venture, including the specific roles and responsibilities of each partner, the financial terms of the agreement, and any other relevant details.

I would be happy to schedule a call with you at your convenience. In the meantime, please feel free to contact me with any questions or concerns you may have about our proposal.

Sincerely,

[Your Name]
```
```
Q: Implement a binary search algorithm to find a specific element in a sorted array.

A: Here is an implementation of the binary search algorithm to find a specific element in a sorted array:
public static int binarySearch(int[] arr, int target) {
int left = 0;
int right = arr.length - 1;
while (left <= right) {
int mid = left + (right - left) / 2;
if (arr[mid] < target) {
left = mid + 1;
} else if (arr[mid] > target) {
right = mid - 1;
} else {
return mid;
}
}
return -1; // target not found
}

Here is how the binary search algorithm works:
1 Initialize two pointers, left and right, to the beginning and end of the array, respectively.
2 While the left pointer is less than or equal to the right pointer, perform the following steps:
1 Calculate the middle index, mid, by dividing the left pointer and the right pointer.
2 Check if the middle element, arr[mid], is less than or equal to the target value.
3 If the middle element is less than the target value, update the left pointer to mid + 1.
4 If the middle element is greater than the target value, update the right pointer to mid - 1.
5 If the middle element is equal to the target value, the target element has been found, so return the middle index.
3 If the target element is not found, return -1.

This algorithm has a runtime complexity of O(log n) and a space complexity of O(1), making it an efficient way to find a specific element in a sorted array.
```

---

# Chinese model card

baichuan-vicuna-7b is a chat model obtained by full-parameter fine-tuning on Vicuna ShareGPT data.
- The foundation model is [baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B), a large-scale pretrained model developed by Baichuan Intelligence that permits commercial use.
- The fine-tuning data includes [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split.json), mixed with a proportion of [COT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT) and [Leetcode](https://www.kaggle.com/datasets/erichartford/leetcode-solutions) data to improve the model's reasoning and coding ability (the data-mixing strategy is inspired by the findings of [TULU](https://arxiv.org/abs/2306.04751)).
- Training code: https://huggingface.co/fireballoon/baichuan-vicuna-7b/blob/main/train_vicuna.py, based on [FastChat](https://github.com/lm-sys/FastChat).


# Test examples on Ziya Eval

Test examples on [Ziya-Eval-Chinese](https://huggingface.co/datasets/IDEA-CCNL/Ziya-Eval-Chinese)
```
Q: What does "Hongmen Banquet" mean?
A: "Hongmen Banquet" is a Chinese idiom. It refers to the "life-or-death showdown" between the armies of Xiang Yu and Liu Bang at Hongmen (northeast of present-day Lintong District, Shaanxi) in 206 BC. At the time, Xiang Yu's uncle Xiang Bo, while meeting with Liu Bang's subordinates, disclosed to Xiang Yu the intelligence that Liu Bang had already fled, which spared Liu Bang from being killed by Xiang Yu. Xiang Yu harbored resentment over this and planned to hold a banquet at Hongmen the next day, invite Liu Bang, and then attack him. However, Xiang Yu's adviser Fan Zeng had long seen Liu Bang's "imperial aura", holding that Liu Bang, the Duke of Pei, would come to rule the four seas and that killing him was the best course of action. During the banquet, Xiang Yu's cousin Xiang Zhuang performed a sword dance, attempting to assassinate Liu Bang. In the end, Liu Bang left on a pretext and made for the camp of Han Xin's army. At the Hongmen Banquet, Liu Bang escaped with his life thanks to Xiang Bo, but he also lost the chance to make peace with Xiang Yu, sowing the seeds of the later Chu-Han contention.

Q: What is the fire emergency telephone number in general use in China today?
A: Currently, the fire emergency number in general use in China is 119.
```
```
Q: What should I do if I can't sleep at night?

A: Insomnia is a symptom rather than a disease. The goal of treating insomnia is to address its underlying causes. Some common causes of insomnia include:

* Insufficient sleep
* Stress
* Mental health problems (anxiety, depression, etc.)
* Medications (prescription and over-the-counter)
* Environmental factors (noise, light, temperature, etc.)

Here are some approaches that can usually help with sleep:

* Avoid stimulants (caffeine, nicotine, alcohol) for 4-6 hours before bed
* Meditate before bed to reduce stress and anxiety
* Avoid bright screens (computers, phones, tablets, etc.) before bed
* Don't exercise for at least an hour before bed
* Don't eat just before bed
* Relax and calm down before bed (read a book, have a glass of milk, etc.)

If these techniques don't work, or if the insomnia is caused by medication or another medical condition, you should contact your doctor.

```