Iker committed
Commit ff21edd • 1 parent: 26a99dd

Update README.md

Files changed (1): README.md (+3 -1)
README.md CHANGED
@@ -127,10 +127,12 @@ pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory
Then you can load the model using

```python
+ import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("HiTZ/GoLLIE-7B")
- model = AutoModelForCausalLM.from_pretrained("HiTZ/GoLLIE-7B", trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained("HiTZ/GoLLIE-7B", trust_remote_code=True, torch_dtype=torch.bfloat16)
+ model.to("cuda")
```

Read our [🚀 Example Jupyter Notebooks](notebooks/) to learn how to easily define guidelines, generate model inputs and parse the output!
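
For reference, a minimal end-to-end sketch built on the updated loading code. This is not the official example: the `prompt` string is a hypothetical placeholder (GoLLIE expects the structured guideline format described in the notebooks), and a CUDA device is assumed.

```python
# Minimal usage sketch, assuming a CUDA GPU is available.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("HiTZ/GoLLIE-7B")
model = AutoModelForCausalLM.from_pretrained(
    "HiTZ/GoLLIE-7B", trust_remote_code=True, torch_dtype=torch.bfloat16
)
model.to("cuda")

# Placeholder input: build the real prompt from guidelines and your text,
# as shown in the example notebooks.
prompt = "..."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```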