vishal042002 committed
Commit b3cd4ba · verified · 1 Parent(s): 7d2b5f1

Update README.md

Files changed (1)
  1. README.md +0 -24
README.md CHANGED
@@ -37,28 +37,4 @@ Limitations:
  The model should not be used as a substitute for professional medical advice.
  Responses should be verified by qualified medical professionals.
 
- RUNNING THROUGH ADAPTER MERGE:
-
- from transformers import AutoModelForCausalLM, AutoTokenizer
- from peft import PeftModel
- import torch
-
- base_model_name = "unsloth/Llama-3.2-3B-Instruct"
- base_model = AutoModelForCausalLM.from_pretrained(base_model_name, torch_dtype=torch.float16, device_map="auto")
-
- adapter_path = "vishal042002/Llama3.2-3b-Instruct-ClinicalSurgery"
- base_model = PeftModel.from_pretrained(base_model, adapter_path)
-
- tokenizer = AutoTokenizer.from_pretrained(base_model_name)
-
- device = "cuda" if torch.cuda.is_available() else "cpu"
- base_model.to(device)
-
- # Sample usage
- input_text = "What is the mortality rate for patients requiring surgical intervention who were unstable preoperatively?"
- inputs = tokenizer(input_text, return_tensors="pt").to(device)
-
- outputs = base_model.generate(**inputs, max_new_tokens=200, temperature=1.5, top_p=0.9)
- decoded_output = tokenizer.decode(outputs[0], skip_special_tokens=True)
-
- print(decoded_output)
 