---
license: apache-2.0
datasets:
- Laurent1/MedQuad-MedicalQnADataset_128tokens_max
library_name: adapter-transformers
tags:
- medical
---

# Model Card for mpt-7b-instruct2-QLoRa-medical-QA

![image/gif](https://cdn-uploads.huggingface.co/production/uploads/6489e1e3eb763749c663f40c/PUBFPpFxsrWRlkYzh7lwX.gif)

This is a question-answering model for medical questions.

Foundation model: https://huggingface.co/ibm/mpt-7b-instruct2

Dataset: https://huggingface.co/datasets/Laurent1/MedQuad-MedicalQnADataset_128tokens_max

The model has been fine-tuned on 2 × T4 GPUs (2 × 14.8 GB VRAM) plus a CPU (29 GB RAM).
## Model Details

The model is based on the foundation model ibm/mpt-7b-instruct2 (Apache 2.0 license).
It has been tuned with the Supervised Fine-tuning Trainer (TRL's `SFTTrainer`) and a PEFT LoRA adapter.
### Libraries

The fine-tuning stack follows from the setup above: `transformers`, `trl` (the SFTTrainer), `peft` (LoRA), and `bitsandbytes` (the 4-bit quantization in QLoRA).

### Notebook used for the training

You can find it in the Files and versions tab or at: https://colab.research.google.com/drive/14nxSP5UuJcnIJtEERyk5nehBL3W03FR3?hl=fr

Improvements can be achieved by increasing the number of training steps and using the full dataset.
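For reference, below is a minimal sketch of this kind of QLoRA setup, assuming the usual `trl`/`peft`/`bitsandbytes` stack. The hyperparameter values are placeholders rather than the exact ones pictured under Training Hyperparameters, and the dataset column name is an assumption.

```python
# Illustrative QLoRA setup, condensed from a typical SFTTrainer notebook.
# Hyperparameter values below are placeholders, not the exact ones pictured
# in the Training Hyperparameters section; the text column name is assumed.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig
from trl import SFTTrainer  # API as of trl 0.7.x

# Only the first 5100 rows were used (see Bias, Risks, and Limitations)
dataset = load_dataset("Laurent1/MedQuad-MedicalQnADataset_128tokens_max",
                       split="train[:5100]")

# 4-bit NF4 quantization of the frozen base weights (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "ibm/mpt-7b-instruct2",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # MPT models ship custom modeling code
)
tokenizer = AutoTokenizer.from_pretrained("ibm/mpt-7b-instruct2",
                                          trust_remote_code=True)

# Trainable low-rank adapters on top of the quantized base model
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                         task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # assumed column name in the dataset
    max_seq_length=128,         # the dataset is capped at 128 tokens
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="mpt-7b-medical-qa",
                           per_device_train_batch_size=4,
                           max_steps=500),
)
trainer.train()
```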
### Direct Use

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6489e1e3eb763749c663f40c/b1Vboznz82PwtN4rLNqGC.png)
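A minimal inference sketch is shown below: the base model is loaded first, then the QLoRA adapter is attached with PEFT. The adapter repository id is assumed from the model card title, and the plain-question prompt is illustrative.

```python
# Hypothetical inference sketch: load the base MPT model, then attach the
# QLoRA adapter from this repository with PEFT. The adapter repo id below
# is assumed from the model card title.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "ibm/mpt-7b-instruct2"
adapter_id = "Laurent1/mpt-7b-instruct2-QLoRa-medical-QA"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # MPT models ship custom modeling code
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

question = "What are the symptoms of glaucoma?"
inputs = tokenizer(question, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```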
## Bias, Risks, and Limitations

To reduce training time, the model was trained on only the first 5,100 rows of the dataset.
Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations.
Generation of plausible yet factually incorrect information, termed hallucination, remains an unsolved issue in large language models.
## Training Details

### Training Data

https://huggingface.co/datasets/Laurent1/MedQuad-MedicalQnADataset_128tokens_max

#### Training Hyperparameters

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6489e1e3eb763749c663f40c/C6XTGVrn4D1Sj2kc9Dq2O.png)

#### Times

Training duration: 6,287.4 s (≈ 1 h 45 min)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6489e1e3eb763749c663f40c/WTQ6v-ruMLF7IevXZDham.png)