
# Llama2 Fine-tuned on MindsDB Docs

This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the MindsDB documentation dataset.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ako-oak/llama2-finetuned-mindsdb"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def chat(prompt):
    # Tokenize the prompt and move the tensors to the model's device
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    # Generate up to 200 new tokens beyond the prompt
    output = model.generate(**inputs, max_new_tokens=200)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(chat("What is the purpose of handlers in MindsDB?"))
```
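Since the base model is Llama-2, prompts may work better when wrapped in Llama-2's instruction format. Whether this particular fine-tune was trained with that format is an assumption; the helper below is a minimal sketch of the standard `[INST]`/`<<SYS>>` wrapping, and the function name is illustrative.

```python
def build_llama2_prompt(question, system_prompt=None):
    """Wrap a question in Llama-2's [INST] instruction format.

    Note: this assumes the fine-tune follows the base Llama-2 chat
    convention; plain prompts may also work.
    """
    if system_prompt:
        # The optional system prompt goes inside <<SYS>> markers,
        # before the user's question
        question = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n{question}"
    return f"<s>[INST] {question} [/INST]"

prompt = build_llama2_prompt(
    "What is the purpose of handlers in MindsDB?",
    system_prompt="You answer questions about the MindsDB documentation.",
)
```

The resulting string can then be passed to the `chat()` helper above in place of the raw question.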