from transformers import AutoTokenizer
import transformers
import torch

model = "newsmediabias/UnBIAS-LLama2-Debiaser-Chat-QLoRA"
tokenizer = AutoTokenizer.from_pretrained(model)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sys_message = "Task:""

prompt=""

intput_text=""

sequences = pipeline(

    intput_text,
    
    do_sample=True,
    
    top_k=10,
    
    num_return_sequences=1,
    
    eos_token_id=tokenizer.eos_token_id,
    
    max_length=len(prompt)+100,
)

res=sequences[0]['generated_text']
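
As written, the placeholders above are empty, so the pipeline call returns nothing useful. Below is a minimal usage sketch, assuming the standard Llama-2 chat prompt layout ([INST] <<SYS>> ... <</SYS>> ... [/INST]); the system message wording and the example sentence are illustrative assumptions, not the exact strings used to train this adapter.

# Minimal usage sketch. The system message and example sentence are
# illustrative assumptions; replace them with your own task text and input.
sys_message = (
    "Task: Rewrite the text below so that it is free of bias while keeping "
    "its meaning and roughly its original length."
)
prompt = "Older employees always struggle to keep up with new technology."
input_text = f"[INST] <<SYS>>\n{sys_message}\n<</SYS>>\n\n{prompt} [/INST]"

sequences = pipeline(
    input_text,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=len(tokenizer(input_text)["input_ids"]) + 100,  # token budget: prompt plus ~100 new tokens
    return_full_text=False,  # return only the model's continuation, not the echoed prompt
)

print(sequences[0]["generated_text"].strip())

With return_full_text=False the pipeline drops the echoed prompt, so generated_text contains only the model's debiased rewrite.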