KUETLLM is a Zephyr-7B-beta finetune trained on a dataset of prompts and answers about Khulna University of Engineering and Technology (KUET). The base model was loaded in 8-bit quantization using bitsandbytes, a LoRA adapter was finetuned on top of it, and the adapter was later merged into the unquantized base model.
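
A rough outline of that pipeline in code (an illustrative sketch, not the exact training script: the base repo id is assumed to be HuggingFaceH4/zephyr-7b-beta, and the adapter/output paths are placeholders):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel, prepare_model_for_kbit_training

base_id = "HuggingFaceH4/zephyr-7b-beta"  # assumed base model repo id

# Load the base model in 8-bit with bitsandbytes before attaching the LoRA adapter.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
base_8bit = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
base_8bit = prepare_model_for_kbit_training(base_8bit)

# ... LoRA adapter training happens here (see the configurations below) ...

# After training, merge the saved adapter into the unquantized base model.
base_fp16 = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base_fp16, "zephyr_lora_output").merge_and_unload()
merged.save_pretrained("kuetllm-merged")  # placeholder path for the merged model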

Below are the training configurations for the fine-tuning process:

from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

from transformers import TrainingArguments

training_args = TrainingArguments(
    per_device_train_batch_size=12,
    gradient_accumulation_steps=1,
    optim="paged_adamw_8bit",
    learning_rate=5e-06,
    fp16=True,
    logging_steps=10,
    num_train_epochs=1,
    output_dir="zephyr_lora_output",
    remove_unused_columns=False,
)
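
A minimal sketch of how these configs can be wired into a PEFT + Trainer loop (assumed setup; base_8bit, train_dataset, and data_collator are placeholders, not the original training script):

from transformers import Trainer
from peft import get_peft_model

# Attach the LoRA adapter to the 8-bit base model and train on the KUET Q&A data.
peft_model = get_peft_model(base_8bit, lora_config)
trainer = Trainer(
    model=peft_model,
    args=training_args,
    train_dataset=train_dataset,  # placeholder: tokenized KUET prompt/answer dataset
    data_collator=data_collator,  # placeholder: e.g. a causal-LM collator
)
trainer.train()
trainer.save_model("zephyr_lora_output")  # saves the LoRA adapter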

Inference:

from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

# Load the merged model and tokenizer (placeholder path for the merged KUETLLM checkpoint).
tokenizer = AutoTokenizer.from_pretrained("kuetllm-merged")
model = AutoModelForCausalLM.from_pretrained("kuetllm-merged", device_map="auto")

def process_data_sample(example):
    # Wrap the user query in the Zephyr-style system/user/assistant prompt format.
    processed_example = "<|system|>\nYou are a KUET authority managed chatbot, help users by answering their queries about KUET.\n<|user|>\n" + example + "\n<|assistant|>\n"
    return processed_example

inp_str = process_data_sample("Tell me about KUET.")
inputs = tokenizer(inp_str, return_tensors="pt").to(model.device)

generation_config = GenerationConfig(
    do_sample=True,
    top_k=1,
    temperature=0.1,
    max_new_tokens=256,
    pad_token_id=tokenizer.eos_token_id,
)

outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
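
The prompt string above follows the Zephyr chat format. If the merged model keeps the base tokenizer's chat template, the same prompt can be built with apply_chat_template instead (a sketch, assuming the template is present):

messages = [
    {"role": "system", "content": "You are a KUET authority managed chatbot, help users by answering their queries about KUET."},
    {"role": "user", "content": "Tell me about KUET."},
]
inp_str = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)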