
This model was trained on a custom Arabic medical dataset.

How to use the model

1. Use a pipeline as a high-level helper

from transformers import pipeline

pipe = pipeline("text-generation", model="EngTig/llama-2-7b-Arabic-medical")
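
A minimal usage sketch follows; the Arabic prompt and generation parameters are illustrative assumptions, not values taken from the model card or training data.

# Generate a completion for an example Arabic prompt
# (prompt translates to "What are the symptoms of diabetes?")
output = pipe(
    "ما هي أعراض مرض السكري؟",
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
print(output[0]["generated_text"])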

2. Load the model directly

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EngTig/llama-2-7b-Arabic-medical")
model = AutoModelForCausalLM.from_pretrained("EngTig/llama-2-7b-Arabic-medical")
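
With the tokenizer and model loaded directly, text can be generated as in the sketch below; the prompt and the max_new_tokens value are illustrative assumptions.

# Tokenize an example Arabic prompt
# (prompt translates to "What are the symptoms of anemia?")
inputs = tokenizer("ما هي أعراض فقر الدم؟", return_tensors="pt")

# Generate a continuation and decode it back to text
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))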
