hl-tburns committed
Commit cb7022d (parent: 8e951de)

Update code example in README to use `device` variable instead of hard-coded `cuda`


The code example uses `device` for model creation but then uses a hard-coded `"cuda"` for the inputs. Updated to use `device` for the inputs as well.

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -40,7 +40,7 @@ model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
  messages = [{"role": "user", "content": "List the steps to bake a chocolate cake from scratch."}]
  input_text=tokenizer.apply_chat_template(messages, tokenize=False)
  print(input_text)
- inputs = tokenizer.encode(input_text, return_tensors="pt").to("cuda")
+ inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
  outputs = model.generate(inputs, max_new_tokens=100, temperature=0.6, top_p=0.92, do_sample=True)
  print(tokenizer.decode(outputs[0]))
  ```
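
For context, here is a minimal sketch of how the corrected snippet reads after this change. The `checkpoint` name and the `device` assignment are assumptions (the README defines the real ones earlier, outside this hunk); only the lines shown in the diff are taken verbatim.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed setup: the README defines the actual checkpoint and device above this hunk.
checkpoint = "org/model-name"  # hypothetical placeholder
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

messages = [{"role": "user", "content": "List the steps to bake a chocolate cake from scratch."}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
print(input_text)

# The fix: move the inputs to the same `device` as the model instead of hard-coding "cuda".
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=100, temperature=0.6, top_p=0.92, do_sample=True)
print(tokenizer.decode(outputs[0]))
```

Using `device` consistently keeps the example runnable on CPU-only machines as well, since the inputs always land on whatever device the model was moved to.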