yellowvanilla committed
Commit 367267d · 1 Parent(s): 82caa58
Files changed (1): README.md +13 -0

README.md CHANGED
@@ -42,6 +42,19 @@ Based on ChatGLM2-6B. This model is used to assist in the sustainability of const
  [More Information Needed]

+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load the tokenizer and the model (fp16, sharded automatically across available devices)
+ tokenizer = AutoTokenizer.from_pretrained("kkkgg/Chat-SCW", use_fast=False)
+ model = AutoModelForCausalLM.from_pretrained("stabilityai/StableBeluga2", torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
+
+ # Build the prompt in the "### System / ### User / ### Assistant" format
+ system_prompt = "### System:\nYou are an AI that assists me in my analysis of carbon reduction in buildings.\n\n"
+ message = "Explain the life cycle stages"
+ prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n"
+
+ # Tokenize, generate with nucleus sampling, and decode the reply
+ inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
+ output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)
+ print(tokenizer.decode(output[0], skip_special_tokens=True))
+ ```
+
  ### Downstream Use [optional]

  <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
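The prompt assembly in the snippet above can be factored into a small helper so the template only lives in one place. This is just a sketch; `build_prompt` is a hypothetical name, and the template string is copied verbatim from the example:

```python
def build_prompt(system: str, user_message: str) -> str:
    """Assemble the '### System / ### User / ### Assistant' prompt used above."""
    system_prompt = f"### System:\n{system}\n\n"
    return f"{system_prompt}### User: {user_message}\n\n### Assistant:\n"


# Reproduces the exact prompt string built inline in the example
prompt = build_prompt(
    "You are an AI that assists me in my analysis of carbon reduction in buildings.",
    "Explain the life cycle stages",
)
print(prompt)
```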