A 16-bit version of the weights from PharMolix/BioMedGPT-LM-7B, for easier download, finetuning, and model merging.

Code

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the model weights in 16-bit precision
tokenizer = AutoTokenizer.from_pretrained("PharMolix/BioMedGPT-LM-7B")
model = AutoModelForCausalLM.from_pretrained("PharMolix/BioMedGPT-LM-7B",
                                             torch_dtype=torch.float16,
                                             device_map="auto")
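The conversion itself is just a dtype cast. A minimal sketch of the effect, using a toy torch module in place of the 7B checkpoint:

```python
import torch
import torch.nn as nn

# Toy module standing in for the full model (the real checkpoint is 7B params)
layer = nn.Linear(4, 4)
print(layer.weight.dtype)  # torch.float32 by default

# Casting to float16 halves the memory footprint of the weights
layer = layer.half()
print(layer.weight.dtype)         # torch.float16
print(layer.weight.element_size())  # 2 bytes per parameter instead of 4
```

Halving the bytes per parameter is what shrinks the download from roughly 27 GB (float32) to roughly 13.5 GB for a 6.74B-parameter model.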
Format: Safetensors · Model size: 6.74B params · Tensor type: FP16