Update README.md
README.md CHANGED

@@ -40,7 +40,7 @@ Model 4-bit Mistral-7B-Instruct-v0.2 finetuned with QLoRA on multiple medical da
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
 
-The model is finetuned on medical data and is intended for research.
+The model is finetuned on medical data and is intended only for research. It should not be used as a substitute for professional medical advice, diagnosis, or treatment.
 
 ## Bias, Risks, and Limitations
 
@@ -60,6 +60,8 @@ Users (both direct and downstream) should be made aware of the risks, biases and
 Use the code below to get started with the model.
 
 ```python
+# !pip install -q transformers accelerate bitsandbytes
+
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
 tokenizer = AutoTokenizer.from_pretrained("adriata/med_mistral")
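The getting-started snippet in the diff stops after loading the tokenizer. Since the base model is Mistral-7B-Instruct-v0.2, prompts are expected in the `[INST] … [/INST]` instruction format. A minimal sketch of building such a prompt by hand is below; the helper function is illustrative and not part of the model card, and in practice `tokenizer.apply_chat_template` produces this format for you:

```python
def build_mistral_prompt(user_message: str) -> str:
    # Mistral-Instruct models expect each user turn wrapped in
    # [INST] ... [/INST], preceded by the BOS token <s>.
    # Illustrative helper, not from the model card.
    return f"<s>[INST] {user_message} [/INST]"

prompt = build_mistral_prompt("What are common contraindications for metformin?")
print(prompt)
```

The resulting string would then be passed to the tokenizer (e.g. `tokenizer(prompt, return_tensors="pt")`) before generation; given the research-only framing of the card, outputs should not be treated as medical advice.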