|
--- |
|
license: apache-2.0 |
|
language: |
|
- en |
|
library_name: transformers |
|
tags: |
|
- 4-bit |
|
- AWQ |
|
- text-generation |
|
- autotrain_compatible |
|
- endpoints_compatible |
|
- medical |
|
datasets: |
|
- Open-Orca/OpenOrca |
|
- pubmed |
|
- medmcqa |
|
- maximegmd/medqa_alpaca_format |
|
base_model: internistai/base-7b-v0.2 |
|
metrics: |
|
- accuracy |
|
pipeline_tag: text-generation |
|
inference: false |
|
quantized_by: Suparious |
|
--- |
|
# internistai/base-7b-v0.2 AWQ |
|
|
|
- Model creator: [internistai](https://huggingface.co/internistai) |
|
- Original model: [base-7b-v0.2](https://huggingface.co/internistai/base-7b-v0.2) |
|
|
|
<img width="30%" src="assets_logo.png" alt="logo" title="logo">
|
|
|
## Model Summary |
|
|
|
Internist.ai 7b is a medical domain large language model trained by medical doctors to demonstrate the benefits of a **physician-in-the-loop** approach. The training data was carefully curated by medical doctors to ensure clinical relevance and the quality required for clinical practice.
|
|
|
**This is the first 7b model to score above the 60% pass threshold on MedQA (USMLE), and it outperforms models of similar size across most medical evaluations.**
|
|
|
This model serves as a proof of concept; larger models trained on a larger corpus of medical literature are planned. Do not hesitate to reach out if you would like to sponsor compute to speed up this training.
|
|
|
## How to use |
|
|
|
### Install the necessary packages |
|
|
|
```bash |
|
pip install --upgrade autoawq autoawq-kernels |
|
``` |
|
|
|
### Example Python code |
|
|
|
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/base-7b-v0.2-AWQ"
system_message = "You are base-7b-v0.2, incarnated as a powerful AI. You were created by internistai."

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)

# Stream generated tokens to stdout as they are produced
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# ChatML prompt template expected by the model
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
         "You walk one mile south, one mile west and one mile north. "\
         "You end up exactly where you started. Where are you?"

# Convert prompt to tokens
tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
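If the tokenizer ships a chat template (an assumption, not verified for this checkpoint; the manual `prompt_template` above always works), the same ChatML prompt can be built with `apply_chat_template`:

```python
# Assumes tokenizer.chat_template is defined for this checkpoint;
# fall back to the manual prompt_template above if it is not.
messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": prompt},
]
tokens = tokenizer.apply_chat_template(messages,
                                       add_generation_prompt=True,
                                       return_tensors='pt').cuda()
```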
|
|
|
### About AWQ |
|
|
|
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
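For reference, AWQ checkpoints like this one are typically produced with AutoAWQ's `quantize` API. The sketch below is illustrative only: the output path and `quant_config` values are common defaults, not the exact settings used for this repository.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "internistai/base-7b-v0.2"
quant_path = "base-7b-v0.2-AWQ"  # assumed output directory
quant_config = {"zero_point": True, "q_group_size": 128,
                "w_bit": 4, "version": "GEMM"}  # common AWQ defaults, not confirmed for this repo

# Load the full-precision model, quantize its weights to 4-bit, and save
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```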
|
|
|
AWQ models are currently supported on Linux and Windows with NVIDIA GPUs only. macOS users should use GGUF models instead.
|
|
|
It is supported by: |
|
|
|
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ |
|
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later supports all model types
|
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) |
|
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers (see the loading sketch after this list)
|
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code |
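As an example of the Transformers route above, a minimal loading sketch (assuming `autoawq` is installed and a CUDA GPU is available):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Transformers >= 4.35.0 loads AWQ checkpoints directly when autoawq is installed
model = AutoModelForCausalLM.from_pretrained("solidrust/base-7b-v0.2-AWQ",
                                             device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("solidrust/base-7b-v0.2-AWQ")

inputs = tokenizer("What is the mechanism of action of metformin?",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```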
|
|