
How to run the model?

Load base model

```python
from transformers import AutoModelForCausalLM

model_name = "Tapan101/Llama-2-7b-Medical-chat-finetune"
model = AutoModelForCausalLM.from_pretrained(model_name)
```
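If GPU memory is tight, the 7B weights can optionally be loaded in 4-bit instead. This is a minimal sketch, not part of the original card; it assumes a CUDA GPU and that the `bitsandbytes` package is installed, and the quantization settings shown are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumption: CUDA GPU + bitsandbytes available; values below are illustrative
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Tapan101/Llama-2-7b-Medical-chat-finetune",
    quantization_config=bnb_config,
    device_map="auto",  # place layers on available devices automatically
)
```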

Load LLaMA tokenizer

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token  # Llama has no pad token by default
tokenizer.padding_side = "right"  # fix weird overflow issue with fp16 training
```

```python
import logging

from transformers import pipeline
```

Suppress warning messages

```python
logging.getLogger("transformers.generation_utils").setLevel(logging.ERROR)
```

Run the text-generation pipeline with the fine-tuned model

```python
prompt = "What is dance therapy?"
# Alternative prompt: "How is dysmenorrhea diagnosed through allopathic medicine?"

pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
result = pipe(f"[INST] {prompt} [/INST]")
generated_text = result[0]["generated_text"]
print(generated_text)
```
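The pipeline output echoes the prompt along with the completion. If only the model's answer is wanted, the text after the closing `[/INST]` tag can be split off. The helper below is a small hypothetical addition, not part of the original card:

```python
def extract_answer(output_text: str) -> str:
    # Hypothetical helper: keep only the text after the closing [/INST] tag
    return output_text.split("[/INST]", 1)[-1].strip()

print(extract_answer(generated_text))
```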
