VishalMysore committed on
Commit 05364df
1 Parent(s): 3b20ca9

Update README.md

Files changed (1): README.md +11 -0
README.md CHANGED
@@ -30,6 +30,17 @@ model = AutoModelForCausalLM.from_pretrained("VishalMysore/cookgptlama")
 
  tokenizer = AutoTokenizer.from_pretrained("VishalMysore/cookgptlama")
  ```

+ Or you can load it in 8-bit precision:
+
+ ```
+ from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
+
+ model_id = "VishalMysore/cookgptlama"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model_8bit = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True)
+ print(model_8bit.get_memory_footprint())
+ ```

  Then use pipeline to query and interact
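
The pipeline step itself is not part of this hunk. For context, a minimal sketch of that step, reusing the `model_8bit` and `tokenizer` objects from the snippet added above; the prompt and generation settings are illustrative assumptions, not the README's exact query code:

```
from transformers import pipeline

# Build a text-generation pipeline around the 8-bit model loaded above
cookgpt = pipeline("text-generation", model=model_8bit, tokenizer=tokenizer)

# Hypothetical prompt; replace with whatever recipe question you want to ask
result = cookgpt("Give me a recipe for paneer butter masala.", max_new_tokens=200)
print(result[0]["generated_text"])
```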