---
language:
- fr
- en
- de
- es
license: apache-2.0
library_name: transformers
tags:
- medical
datasets:
- nuvocare/MSD_instruct
pipeline_tag: text-generation
---
# Model Card for NuvoChat
## Model Details
NuvoChat is a fine-tuned version of Mistral-7B-Instruct-v0.2 for the medical domain. Fine-tuning was done with LoRA on a quantized version of the base model.
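The exact adapter configuration is not published; the following is a minimal sketch of a QLoRA-style setup with `transformers`, `bitsandbytes`, and `peft`, where the rank, alpha, dropout, and target modules are illustrative assumptions rather than the values actually used:
```python
# Hedged sketch: one possible quantized-LoRA setup. Hyperparameters below
# are illustrative assumptions, not NuvoChat's actual training configuration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # quantize the base model to 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    quantization_config=bnb_config,
    device_map="auto",
)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # assumed values
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)    # only the LoRA adapters are trained
```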
### Model Description
- **Developed by:** [Samuel Chaineau, Nuvocare](https://www.linkedin.com/in/samuel-chaineau-734b13122/)
- **Funded by:** [Nuvocare](https://www.nuvocare.fr/)
- **Language(s) (NLP):** English, French, Spanish and German
- **Finetuned from model:** [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
## Uses
### Direct Use
NuvoChat is designed to assist patients and clinicians by providing relevant, clear, and well-adapted information. The model effectively adapts its tone and vocabulary based on the user's background.
This is done by providing the model with a specific prompt template in which the user's status (patient or healthcare professional) is explicitly stated (see the sketch after the list below).
The model can be used for:
- Chatting with patients (with or without a RAG set-up)
- Chatting with clinicians (with or without a RAG set-up)
- Medical explanation translation
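The exact wording of the status marker used during fine-tuning is not documented; the sketch below only illustrates the idea of stating the user's status inside a Mistral-style `[INST]` prompt:
```python
# Hedged sketch: making the user's status explicit in the prompt.
# The status phrasing is an assumption, not the trained-in template.
def build_prompt(question: str, status: str) -> str:
    """Wrap a question in a Mistral-style instruction prompt.

    status: e.g. "patient" or "healthcare professional".
    """
    return f"[INST] I am a {status}. {question} [/INST]"

patient_prompt = build_prompt(
    "What are the side effects of cataract surgery?", "patient"
)
clinician_prompt = build_prompt(
    "What are the side effects of cataract surgery?", "healthcare professional"
)
```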
### Downstream Use
The model can be used for text summarization.
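For example (a hedged sketch; the instruction wording below is an assumption, not a trained-in template):
```python
# Hedged sketch: prompting NuvoChat for medical text summarization.
report = "Cataract surgery replaces the eye's clouded natural lens with an artificial one..."
prompt = f"[INST] I am a patient. Summarize the following text in simple terms:\n{report} [/INST]"
```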
## Bias, Risks, and Limitations
The base model was trained by Mistral on an undisclosed dataset, and NuvoChat was fine-tuned on a multilingual dataset derived from MSD. Performance may therefore vary depending on the language used.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("nuvocare/NuvoChat", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# French: "I am a patient who would like information about cataract surgery"
prompt = "[INST] Je suis un patient qui souhaite connaitre des informations sur la chirurgie de la cataracte [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, pad_token_id=tokenizer.eos_token_id)
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
## Training Details
### Training Data
See the [nuvocare/MSD_instruct](https://huggingface.co/datasets/nuvocare/MSD_instruct) dataset card.
### Training Procedure
The model was trained for 7,000 steps with a total batch size of 32 (slightly more than one epoch) and a sequence length of 2,048. A hedged sketch of one way to express this configuration is shown below.
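This is a minimal sketch using `trl`'s `SFTTrainer`; the per-device batch / gradient-accumulation split, learning rate, and dataset text field are assumptions, and `model` stands for the quantized LoRA model sketched under Model Details:
```python
# Hedged sketch: one way to express the reported schedule (7,000 steps,
# effective batch size 32, sequence length 2048). Values marked as
# assumptions are illustrative, not the actual training configuration.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("nuvocare/MSD_instruct", split="train")
args = TrainingArguments(
    output_dir="nuvochat-sft",
    max_steps=7000,
    per_device_train_batch_size=4,   # assumption: 4 x 8 accumulation = 32 total
    gradient_accumulation_steps=8,
    learning_rate=2e-4,              # assumed value
    bf16=True,
)
trainer = SFTTrainer(
    model=model,                     # the quantized LoRA model sketched above
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",       # assumption: name of the text field
    max_seq_length=2048,
)
trainer.train()
```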