
Model Card for phatvo/Meta-Llama3.1-8B-Instruct-RAFT

Model Details

Model Description

LoRA adapters for meta-llama/Meta-Llama-3.1-8B-Instruct, trained on 100 context samples from the HotpotQA dataset using the RAFT (Retrieval Augmented Fine-Tuning) method. The adapters help the model reason through the provided context and return more accurate answers.
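The repository appears to ship the full merged weights (8.03B parameters in BF16), so it can be loaded directly as shown in the Usage section below. If you instead have the LoRA adapter weights on their own, they are typically attached to the base model with peft; a minimal sketch, assuming a standard PEFT adapter layout and a hypothetical local path:

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
adapter_path = "path/to/raft-lora-adapters"  # hypothetical adapter location

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the RAFT-trained LoRA adapters on top of the frozen base weights
model = PeftModel.from_pretrained(base, adapter_path)
# Optionally fold the adapters into the base weights for faster inference
model = model.merge_and_unload()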

Evaluation

Evaluated on the full validation set of HotpotQA.

type        exact_match   f1       precision   recall
pretrained  0.2980        0.3979   0.4116      0.5263
finetuned   0.3606        0.4857   0.4989      0.5318

Relative to the pretrained model, the fine-tuned version improves F1 by about 22% (0.4857 vs. 0.3979) and by about 15% on average across the four metrics.
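For reference, exact match and F1 here are the standard token-overlap QA metrics; the sketch below shows one common formulation (SQuAD-style answer normalization is assumed; the official HotpotQA script also special-cases yes/no answers):

import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred: str, gold: str) -> float:
    return float(normalize(pred) == normalize(gold))

def f1(pred: str, gold: str) -> float:
    pred_toks, gold_toks = normalize(pred).split(), normalize(gold).split()
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred_toks), overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

Averaging these per-example scores over the validation set produces numbers comparable to the table above.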

Usage

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "phatvo/Meta-Llama3.1-8B-Instruct-RAFT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    revision="main",
    torch_dtype=torch.bfloat16,  # weights are shipped in BF16
    trust_remote_code=True,
)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# System instruction defining the expected CoT output format
inst = (
    "Given the question and context below, thinking in logical reasoning way for your answer. "
    "Please provide only your answer in this format: CoT Answer: {reason} <ANSWER>: {answer}."
)
context = ""   # fill in the retrieved context passage(s)
question = ""  # fill in the question
prompt = f"{context}\n{question}"

chat = [
    {"role": "system", "content": inst},
    {"role": "user", "content": prompt},
]
# add_generation_prompt=True appends the assistant header so the model
# starts a fresh assistant turn instead of continuing the user message
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

output = pipe(
    prompt,
    temperature=0.001,
    max_new_tokens=1024,  # recommended: more than 800, to leave room for the CoT reasoning
    return_full_text=False,
    do_sample=True,
)

print(output[0]["generated_text"])
# CoT Answer: thoughts... <ANSWER>: final_answer...
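
The final short answer can be recovered from the generation by splitting on the "<ANSWER>:" marker the model is instructed to emit; a minimal sketch:

def extract_answer(generated: str) -> str:
    """Return the text after the last '<ANSWER>:' marker, or the full output if absent."""
    marker = "<ANSWER>:"
    if marker in generated:
        return generated.rsplit(marker, 1)[-1].strip()
    return generated.strip()

answer = extract_answer(output[0]["generated_text"])
print(answer)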
Model size: 8.03B params · Tensor type: BF16 · Format: Safetensors
