---
base_model: unsloth/Meta-Llama-3.1-8B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# MedGPT-Llama3.1-8B-v.1-GGUF
- This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B), trained on a dataset created by [Valerio Job](https://huggingface.co/valeriojob) together with GPs and based on real medical data.
- Version 1 (v.1) is the very first version of MedGPT; its training dataset was deliberately kept simple and small, with only 60 examples.
- This repo contains the quantized models in the GGUF format. A separate repo, [valeriojob/MedGPT-Llama3.1-8B-BA-v.1](https://huggingface.co/valeriojob/MedGPT-Llama3.1-8B-BA-v.1), contains the model in its default 16-bit format as well as its LoRA adapters.
- This model was quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp).
- This model is available in the following quantization formats:
- BF16
- Q4_K_M
- Q5_K_M
- Q8_0
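
As a minimal sketch of how one of the quantized files could be fetched and run locally with [llama.cpp](https://github.com/ggerganov/llama.cpp) (the exact `.gguf` filename below is an assumption; check the "Files and versions" tab of this repo for the real names):

```shell
# Download one quantized variant from the Hub.
# NOTE: the .gguf filename is an assumption -- verify it in the repo's file listing.
huggingface-cli download valeriojob/MedGPT-Llama3.1-8B-v.1-GGUF \
  MedGPT-Llama3.1-8B-v.1.Q4_K_M.gguf --local-dir .

# Run an interactive session with the llama-cli binary built from llama.cpp.
./llama-cli -m MedGPT-Llama3.1-8B-v.1.Q4_K_M.gguf \
  -p "Summarize the patient's symptoms:" -n 256
```

The smaller quants (Q4_K_M, Q5_K_M) trade some accuracy for lower memory use; Q8_0 and BF16 stay closer to the original weights at the cost of size.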
## Model description
This model acts as a supplementary assistant to GPs, helping them with medical and administrative tasks.
## Intended uses & limitations
The fine-tuned model should not be used in production! It was created as an initial prototype in the context of a bachelor thesis.
## Training and evaluation data
The dataset (train and test) used for fine-tuning this model can be found here: [datasets/valeriojob/BA-v.1](https://huggingface.co/datasets/valeriojob/BA-v.1)
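
The dataset can be inspected with the 🤗 `datasets` library (a sketch; the split names shown are assumptions and should be checked against the dataset repo):

```python
# Load the fine-tuning dataset from the Hugging Face Hub.
# NOTE: requires network access; the "train" split name is an assumption.
from datasets import load_dataset

ds = load_dataset("valeriojob/BA-v.1")
print(ds)              # shows the available splits and row counts
print(ds["train"][0])  # inspect the first training example
```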
## Licenses
- **License:** apache-2.0