---
base_model: ruslanmv/Medical-Llama3-8B
datasets:
- ruslanmv/ai-medical-chatbot
language: en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- ruslanmv
- llama
- trl
- llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
- healthcare
- medical
- clinical
- med
- lifescience
- Pharmaceutical
- Pharma
- llama-cpp
- gguf-my-repo
widget:
- example_title: Medical-Llama3-8B
  messages:
  - role: system
    content: You are an expert and experienced from the healthcare and biomedical
      domain with extensive medical knowledge and practical experience.
  - role: user
    content: How long does it take for newborn jaundice to go away?
  output:
    text: Newborn jaundice, also known as neonatal jaundice, is a common condition
      in newborns where the yellowing of the skin and eyes occurs due to an elevated
      level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when
      red blood cells break down. In most cases, newborn jaundice resolves on its
      own without any specific treatment. The duration of newborn jaundice can vary
      depending on several factors such as the underlying cause, gestational age at
      birth, and individual variations in bilirubin metabolism. Here are some general
      guidelines
model-index:
- name: Medical-Llama3-8B
  results: []
---

# genevera/Medical-Llama3-8B-Q6_K-GGUF
This model was converted to GGUF format from [`ruslanmv/Medical-Llama3-8B`](https://huggingface.co/ruslanmv/Medical-Llama3-8B) using llama.cpp, via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ruslanmv/Medical-Llama3-8B) for more details on the model.
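
If you prefer to grab the quantized weights yourself (for example, to use them with another GGUF-compatible runtime), you can download the file directly from this repo; a minimal sketch using the Hugging Face CLI (assumes `huggingface_hub` is installed):

```bash
# Install the Hugging Face CLI (ships with the huggingface_hub package)
pip install -U "huggingface_hub[cli]"

# Download only the Q6_K GGUF file from this repo into the current directory
huggingface-cli download genevera/Medical-Llama3-8B-Q6_K-GGUF \
  medical-llama3-8b-q6_k.gguf --local-dir .
```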
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):

```bash
brew install llama.cpp
```
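
Optionally, sanity-check the install first; recent llama.cpp builds print their build information with `--version`:

```bash
# Print llama.cpp version/build info to confirm the install
llama-cli --version
```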
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo genevera/Medical-Llama3-8B-Q6_K-GGUF --hf-file medical-llama3-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
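
Since this checkpoint is a medical fine-tune, you will likely want to replace the placeholder prompt with a domain question; for example, reusing the question from the widget above and capping generation with `-n`:

```bash
# Same invocation, but with a medical prompt and a 256-token generation cap
llama-cli --hf-repo genevera/Medical-Llama3-8B-Q6_K-GGUF --hf-file medical-llama3-8b-q6_k.gguf \
  -p "How long does it take for newborn jaundice to go away?" -n 256
```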
### Server:
```bash
llama-server --hf-repo genevera/Medical-Llama3-8B-Q6_K-GGUF --hf-file medical-llama3-8b-q6_k.gguf -c 2048
```
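
Once the server is up (it listens on `http://localhost:8080` by default), you can query its OpenAI-compatible chat endpoint; a minimal sketch reusing the system and user messages from the widget above:

```bash
# Send a chat completion request to the local llama-server instance
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."},
          {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}
        ],
        "max_tokens": 256
      }'
```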
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
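
For example, a CUDA-enabled build on a Linux machine with an Nvidia GPU might look like this (assumes the CUDA toolkit is installed):

```bash
# Build with remote-download support (LLAMA_CURL) and CUDA offloading (LLAMA_CUDA)
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j
```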
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo genevera/Medical-Llama3-8B-Q6_K-GGUF --hf-file medical-llama3-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo genevera/Medical-Llama3-8B-Q6_K-GGUF --hf-file medical-llama3-8b-q6_k.gguf -c 2048
```