---
license: apache-2.0
language:
- en
tags:
- medical
---
#### This model is an optimized version of "alibidaran/llama-2-7b-virtual_doctor" for running on CPU and GPU. It can easily be used on a CPU or an ordinary personal computer.

## Uses

To use this model on a CPU, first install the required libraries (`transformers` and `torch` are needed for the tokenizer and generation loop below):

```python
!pip install ctransformers transformers torch
```
You can then use the model in your project with the code below:

```python
from ctransformers import AutoModelForCausalLM
from transformers import AutoTokenizer
import torch

# Load the GGUF weights through ctransformers; hf=True returns a model
# compatible with the Hugging Face generate() API.
model = AutoModelForCausalLM.from_pretrained("alibidaran/llama-2-7b-virtual_doctor-gguf", hf=True)
tokenizer = AutoTokenizer.from_pretrained("alibidaran/llama-2-7b-virtual_doctor")

prompt = "Hi doctor, I have a runny nose and a fever, and I often feel tired. What should I do?"
# Wrap the prompt in the template the model was trained on
# (the "###Asistant" spelling is kept as-is from the original template).
text = f"<s> ###Human: {prompt} ###Asistant: "
inputs = tokenizer(text, return_tensors='pt').to('cpu')

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True,
                             top_p=0.92, top_k=10, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```