adriata committed on
Commit ab53393
1 Parent(s): aa89c73

Update README.md

Files changed (1): README.md (+3 −1)
README.md CHANGED
@@ -40,7 +40,7 @@ Model 4-bit Mistral-7B-Instruct-v0.2 finetuned with QLoRA on multiple medical da
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
 
-The model is finetuned on medical data and is intended for research. However, it should not be used as a substitute for professional medical advice, diagnosis, or treatment.
+The model is finetuned on medical data and is intended only for research. It should not be used as a substitute for professional medical advice, diagnosis, or treatment.
 
 ## Bias, Risks, and Limitations
 
@@ -60,6 +60,8 @@ Users (both direct and downstream) should be made aware of the risks, biases and
 Use the code below to get started with the model.
 
 ```python
+# !pip install -q transformers accelerate bitsandbytes
+
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
 tokenizer = AutoTokenizer.from_pretrained("adriata/med_mistral")
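The quick-start snippet in the diff stops after loading the tokenizer. Since the base model is Mistral-7B-Instruct-v0.2, prompts are expected in its `[INST] ... [/INST]` instruction format; `tokenizer.apply_chat_template` produces this automatically, but the layout can be sketched by hand. The helper name and the example question below are illustrative, not part of the commit:

```python
# Sketch of the Mistral-Instruct prompt layout (assumed from the base model,
# Mistral-7B-Instruct-v0.2; not shown in the diff itself).
def build_prompt(question: str) -> str:
    # The tokenizer prepends the <s> BOS token on encode, so only the
    # instruction tags are written here.
    return f"[INST] {question} [/INST]"

prompt = build_prompt("What are common symptoms of anemia?")
print(prompt)  # [INST] What are common symptoms of anemia? [/INST]
```

In practice the formatted prompt would then be tokenized and passed to `model.generate`; the `bitsandbytes` dependency added in this commit is what allows the 4-bit weights to be loaded on GPU.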