
This model is fine-tuned on a very small ABAP dataset, using NousResearch/Llama-2-7b-chat-hf as the base model.

Sample code

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_path = "smjain/abap-nous-hermes"

# Load the fine-tuned model and the base model's tokenizer
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")

prompt = "Write a sample ABAP report"  # change to your desired prompt

# Generate up to 256 new tokens from the prompt
gen = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=256)
result = gen(prompt)

print(result[0]["generated_text"])
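
If a GPU is available, generation can be sped up by loading the model in half precision with automatic device placement. This is a minimal sketch, not part of the original example; it assumes torch and accelerate are installed, and the torch_dtype and device_map settings are optional additions.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_path = "smjain/abap-nous-hermes"

# Optional: fp16 weights and automatic GPU placement (requires accelerate)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")

gen = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=256)
print(gen("Write a sample ABAP report")[0]["generated_text"])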

