Studeni committed
Commit 9aa0d73
1 Parent(s): f186bcf

Update README.md


## Problem:

If we do not set the device in the pipeline, we first get this warning:
```
UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`.
```
After that, we get this error:
```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
```
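
For illustration, this mismatch can be reproduced with any CUDA-resident model whose inputs are left on the CPU (a minimal sketch only; `gpt2` is used here just to keep the example small, it is not the model from the README):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch of the failure mode: model weights on the GPU, inputs left on the CPU.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").to("cuda")  # weights on cuda:0

inputs = tokenizer("Tell me about AI", return_tensors="pt")      # tensors created on cpu
model.generate(**inputs, max_new_tokens=8)  # emits the warning above, then raises the RuntimeError
```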

## Solution:

Set the device explicitly when loading the model and when constructing the pipeline, so that all tensors end up on the same device.
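
A minimal sketch of the fixed pattern (assuming `torch`, `autoawq`, and `transformers` are installed; the `.generate()` call below is a simplification of the README's `pipeline` usage, and the `device=` argument to `from_quantized` follows the diff below):
```python
import torch
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Pick one device up front and reuse it for the model and the inputs.
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name_or_path = "TheBloke/Mistral-7B-v0.1-AWQ"

# Load the quantized model onto that device (device= as added in the diff below).
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
                                          trust_remote_code=False, safetensors=True,
                                          device=device)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)

# Move the tokenized prompt to the same device before generating, so the embedding
# lookup and the model weights end up on the same device.
input_ids = tokenizer("Tell me about AI", return_tensors="pt").input_ids.to(device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```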

Files changed (1):

1. README.md (+7 -3)

README.md CHANGED:

```
@@ -108,12 +108,15 @@ pip3 install git+https://github.com/casper-hansen/AutoAWQ.git@1c5ccc791fa2cb0697
 from awq import AutoAWQForCausalLM
 from transformers import AutoTokenizer
 
+device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
+
+
 model_name_or_path = "TheBloke/Mistral-7B-v0.1-AWQ"
 
 # Load model
 model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
-                                          trust_remote_code=False, safetensors=True)
-tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
+                                          trust_remote_code=False, safetensors=True, device=device)
+tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False, device=device)
 
 prompt = "Tell me about AI"
 prompt_template=f'''{prompt}
@@ -154,7 +157,8 @@ pipe = pipeline(
     temperature=0.7,
     top_p=0.95,
     top_k=40,
-    repetition_penalty=1.1
+    repetition_penalty=1.1,
+    device=device,
 )
 
 print(pipe(prompt_template)[0]['generated_text'])
```