
Model Card for thepinkdrummer/maayavi

Model Details

Model Description

  • Developed by: thepinkdrummer

Direct Use

Login

```python
from huggingface_hub import notebook_login

notebook_login()
```

Import the Libraries

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
```

Load the Base Model and Tokenizer

```python
base_model_name = "unsloth/meta-llama-3.1-8b-bnb-4bit"
base_model = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
```

Load the PEFT Configuration and the Fine-Tuned Model

```python
peft_model_name = "thepinkdrummer/maayavi"
config = PeftConfig.from_pretrained(peft_model_name)
model = PeftModel.from_pretrained(base_model, peft_model_name)
```

Run the Chat Model

```python
import torch

def chat_with_model(prompt, max_length=512):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    with torch.no_grad():
        output = model.generate(
            inputs["input_ids"],
            max_length=max_length,
            num_return_sequences=1,
            temperature=0.7,
            top_p=0.9,
            top_k=50,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id,
            eos_token_id=tokenizer.eos_token_id,
        )
    response = tokenizer.decode(output[0], skip_special_tokens=True)
    return response
```
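The `generate` call above combines three sampling controls: `temperature` rescales the logits, `top_k` keeps only the 50 most probable tokens, and `top_p` keeps the smallest set of tokens whose cumulative probability reaches 0.9. As a toy illustration of how these interact (a self-contained sketch over a plain list of logits; the function name and interface are invented here and are not the actual transformers implementation):

```python
import math
import random

def sample_next_token(logits, temperature=0.7, top_k=50, top_p=0.9, rng=None):
    """Toy temperature / top-k / top-p (nucleus) sampling over raw logits."""
    rng = rng or random.Random(0)
    # Temperature: values below 1 sharpen the distribution.
    scaled = [l / temperature for l in logits]
    # Softmax (shifted by the max for numerical stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [(i, e / total) for i, e in enumerate(exps)]
    # Top-k: keep only the k most probable token ids.
    probs.sort(key=lambda ip: ip[1], reverse=True)
    probs = probs[:top_k]
    # Top-p: keep the smallest prefix whose cumulative mass reaches top_p.
    kept, mass = [], 0.0
    for i, p in probs:
        kept.append((i, p))
        mass += p
        if mass >= top_p:
            break
    # Renormalise the surviving tokens and draw one id.
    norm = sum(p for _, p in kept)
    r = rng.random() * norm
    for i, p in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][0]
```

With a sharply peaked distribution, temperature 0.7 and top-p 0.9 will usually leave only the single most likely token in the candidate set, which is why low temperatures make the output more deterministic.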

```python
user_input = "Hello! How are you?"
response = chat_with_model(user_input)
print(response)
```
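For chat-tuned Llama 3.1 checkpoints, results are usually better when the prompt follows the model's chat template rather than being passed as raw text. In practice `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` does this for you; the sketch below builds the prompt by hand to show the shape of the format (the exact special-token layout is an assumption based on the published Llama 3 chat format, and the helper name is invented for illustration):

```python
def build_llama3_prompt(messages, system=None):
    """Assemble a Llama-3-style chat prompt from (role, content) pairs.

    ASSUMPTION: this mirrors the published Llama 3 chat format; prefer
    tokenizer.apply_chat_template(...) for the authoritative template.
    """
    parts = ["<|begin_of_text|>"]
    if system:
        parts.append(
            f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        )
    for role, content in messages:
        parts.append(
            f"<|start_header_id|>{role}<|end_header_id|>\n\n{content}<|eot_id|>"
        )
    # Trailing assistant header cues the model to generate its reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([("user", "Hello! How are you?")])
```

The resulting string can then be passed to `chat_with_model` in place of the bare `user_input`.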

Framework versions

  • PEFT 0.13.2