16-bit (float16) version of the weights from PharMolix/BioMedGPT-LM-7B, for easier download, finetuning, and model merging.
## Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the model in half precision (float16)
tokenizer = AutoTokenizer.from_pretrained("PharMolix/BioMedGPT-LM-7B")
model = AutoModelForCausalLM.from_pretrained(
    "PharMolix/BioMedGPT-LM-7B",
    torch_dtype=torch.float16,
    device_map="auto",
)
```
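The 16-bit conversion itself amounts to casting every parameter tensor in the checkpoint to `torch.float16`, halving the storage from 4 bytes to 2 bytes per parameter. A minimal sketch of that cast on a toy module (a stand-in, not the actual 7B model):

```python
import torch
import torch.nn as nn

# Toy stand-in for a full checkpoint; the real conversion applies the
# same cast to every parameter tensor in the 7B model.
model = nn.Linear(8, 4)          # parameters are float32 by default
model_fp16 = model.half()        # cast parameters and buffers to float16

assert all(p.dtype == torch.float16 for p in model_fp16.parameters())

# Half-precision weights take 2 bytes per parameter instead of 4.
n_params = sum(p.numel() for p in model_fp16.parameters())
print(n_params * 2, "bytes of parameter storage")  # 36 params -> 72 bytes
```

For a 7B-parameter model the same arithmetic gives roughly 14 GB of weights in float16 versus about 28 GB in float32, which is the practical motivation for distributing the 16-bit copy.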