
PubMed Gemma

Introducing PubMed Gemma, a medical large language model built on Gemma-7B, an open-source LLM from Google.

Model details

  • Model type: GemmaForCausalLM
  • Model size: 7B, instruct-tuned
  • Language(s) (NLP): English
  • Hardware accelerator: 2x T4 GPUs
  • Total VRAM: 14 GB

Inference procedure

Here's how you can run inference with the model using 🤗 Transformers:
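
Below is a minimal sketch. The repository id comes from the citation on this card; the prompt, half-precision loading, and generation settings are illustrative assumptions and can be adjusted to your hardware (for example, quantized loading to fit the 14 GB of VRAM listed above).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lucifertrj/pubmed_gemma"

# Load the tokenizer and model; device_map="auto" spreads layers across available GPUs.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Example medical question (illustrative prompt, not from the card).
prompt = "What are the common symptoms of iron-deficiency anemia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a response; sampling parameters here are illustrative defaults.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```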


Citation Information

@misc{pubmed_gemma,
  author = {Tarun R Jain},
  title = {PubMed Gemma},
  year = {2024},
  publisher = {HuggingFace, AI Planet},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/lucifertrj/pubmed_gemma}}
}