Palmyra-Med, a powerful LLM designed for healthcare

Model Details

Palmyra-Med is a model built by Writer specifically to meet the needs of the healthcare industry. It is the leading LLM on biomedical benchmarks, with an average score of 85.87%, outperforming GPT-4, Claude Opus, Gemini, the Med-PaLM-2 base model, and a medically trained human test-taker.

Specialized for Biomedical Applications

Palmyra-Med-70B is meticulously designed to meet the unique linguistic and knowledge demands of the medical and life sciences sectors. It has been fine-tuned on an extensive collection of high-quality biomedical data, ensuring it can comprehend and generate text with precise domain-specific accuracy and fluency.

Model Description

  • Developed by: Writer
  • Model type: Llama
  • Language(s) (NLP): English
  • License: Writer
  • Finetuned from model: Palmyra-X-004

Intended Use

Intended Use Cases: Palmyra-Med is intended for commercial and research use in English. Instruction-tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

Out-of-scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws); use in any other way prohibited by the Acceptable Use Policy and the Writer Open Source License; use in languages other than English**.

**Note: Developers may fine-tune Palmyra-Med models for languages beyond English provided they comply with the Writer Open Source License and the Acceptable Use Policy.

Use with transformers

You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the generate() function. Let's see examples of both.

import transformers
import torch

model_id = "Writer/Palmyra-Med-70B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a highly knowledgeable and experienced expert in the healthcare and biomedical field, possessing extensive medical knowledge and practical expertise."},
    {"role": "user", "content": "Does danzhi Xiaoyao San ameliorate depressive-like behavior by shifting toward serotonin via the downregulation of hippocampal indoleamine 2,3-dioxygenase?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,  # must be > 0 when do_sample=True
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
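The pipeline example above handles tokenization and decoding for you. As an alternative, here is a sketch of the second approach mentioned earlier: loading the model with the Auto classes and calling generate() directly. It assumes you have been granted access to the gated Writer/Palmyra-Med-70B repository and have enough GPU memory for a 70B-parameter model; the example question is illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Writer/Palmyra-Med-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a highly knowledgeable and experienced expert in the healthcare and biomedical field."},
    {"role": "user", "content": "What is the mechanism of action of metformin?"},
]

# Render the chat template and tokenize in one step.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    eos_token_id=[
        tokenizer.eos_token_id,
        tokenizer.convert_tokens_to_ids("<|eot_id|>"),
    ],
)

# Strip the prompt tokens and decode only the newly generated text.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

This route gives you direct access to the token IDs and generation config, which is useful when batching requests or integrating with custom decoding logic.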




Bias, Risks, and Limitations

[More Information Needed]

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

Evaluation

| Model | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|---|---|---|---|---|---|---|---|---|---|---|
| Palmyra-Med-70B | 90.94 | 94 | 83.7 | 92.65 | 94.44 | 84.39 | 78.63 | 79.6 | 74.44 | 85.87 |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | 95.2 | 94.4 | 80.9 | 79.7 | 79.2 | 71.3 | 84.08 |
| GPT-4 | 86.04 | 91 | 80 | 93.01 | 95.14 | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| Gemini | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B (Llama) | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |


Model size: 70.6B parameters (Safetensors, FP16)