A LoRA fine-tune of Phi-3 on answers generated by aaditya/Llama3-OpenBioLLM-8B for a subset of openlifescienceai/medmcqa questions, built with the LoRA4context library.
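
For context, the training pairs were presumably produced by prompting OpenBioLLM with MedMCQA questions. A minimal sketch of that step, assuming a plain transformers pipeline (the actual LoRA4context pipeline may differ; the split size and generation settings here are assumptions):

from datasets import load_dataset
from transformers import pipeline

# Load a subset of MedMCQA; the real subset selection is not documented here.
medmcqa = load_dataset("openlifescienceai/medmcqa", split="train[:100]")

def to_prompt(row):
    # MedMCQA stores the four options in columns opa, opb, opc, opd.
    options = "\n".join(f"{letter}. {row[col]}" for letter, col in zip("ABCD", ["opa", "opb", "opc", "opd"]))
    return f"{row['question']}\n{options}"

# Generate teacher answers with OpenBioLLM (device_map='auto' requires accelerate).
generator = pipeline("text-generation", model="aaditya/Llama3-OpenBioLLM-8B", device_map="auto")
answers = [
    generator(to_prompt(row), max_new_tokens=256, return_full_text=False)[0]["generated_text"]
    for row in medmcqa
]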

Usage:

prompt = """A 55- year old diabetic patient presents with transient obscuration in vision for 2-3 days followed by sudden loss of vision. Which of the following would be the best test to evaluate the symptoms?
A. Serum ACE levels
B. Quantiferon-Gold TB test
C. Elevated homocysteine levels
D. Serum creatinine levels"""

from huggingface_hub import snapshot_download

# Download the repo contents, including the 'adapters' directory, into the current directory.
snapshot_download(repo_id="JosefAlbers/phi-3-usmle", repo_type="model", local_dir=".")

from mlx_lm import load, generate

# Load the 4-bit Phi-3 base model with the downloaded LoRA adapters applied.
model, tokenizer = load(
    "mlx-community/Phi-3-mini-4k-instruct-4bit-no-q-embed",
    tokenizer_config={"eos_token": "<|end|>"},
    adapter_path="adapters",
)

# Wrap the question in Phi-3's chat markup and generate an answer.
generate(model, tokenizer, f"<|user|>\n{prompt}\n<|end|>\n<|assistant|>", max_tokens=500)
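
The chat markup can also be built with the tokenizer's chat template instead of hand-formatting the string. A sketch reusing the model, tokenizer, and prompt from above, assuming the bundled tokenizer ships Phi-3's template (the ask helper is hypothetical):

def ask(question, max_tokens=500):
    # Produce the <|user|> ... <|assistant|> markup from the chat template.
    messages = [{"role": "user", "content": question}]
    chat_prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    return generate(model, tokenizer, chat_prompt, max_tokens=max_tokens)

print(ask(prompt))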
Dataset used to train JosefAlbers/phi-3-medmcqa-openbiollm: openlifescienceai/medmcqa