Addressing Inconsistencies in Model Outputs: Understanding and Solutions

#13
by shivammehta - opened

When experimenting with this model, I've observed occasional discrepancies in its output. Sometimes it provides the correct response and sometimes it doesn't, even when presented with the same or similar questions. I have two questions: why does this occur, and how can we address it?
For example, the output sometimes arrives at the correct answer and occasionally does not; the behavior is not 100% predictable.

Code:
from huggingface_hub import hf_hub_download
from langchain.llms import LlamaCpp
from langchain.agents import create_csv_agent  # moved to langchain_experimental.agents in newer versions

MODEL_ID = "TheBloke/Mistral-7B-Instruct-v0.1-GGUF"
MODEL_BASENAME = "mistral-7b-instruct-v0.1.Q4_K_M.gguf"  # e.g. the Q4_K_M quant; pick the file you want from the repo

CONTEXT_WINDOW_SIZE = 4096
MAX_NEW_TOKENS = 1024

# Download the GGUF weights from the Hub (resumes partial downloads).
model_path = hf_hub_download(
    repo_id=MODEL_ID,
    filename=MODEL_BASENAME,
    resume_download=True,
    cache_dir="./models",
)

llm = LlamaCpp(
    model_path=model_path,
    temperature=0.1,            # low, but non-zero: output can still vary between runs
    n_ctx=CONTEXT_WINDOW_SIZE,
    max_tokens=MAX_NEW_TOKENS,
    n_batch=100,
    top_p=1,
    verbose=True,
    n_gpu_layers=100,           # offload as many layers as fit on the GPU
)

agent = create_csv_agent(llm, ['./Data/Employees.csv', './Data/Verticals.csv'], verbose=True)
response = agent.run("Which vertical name has the most number of resignations?")
print(response)

QUERY:

  1. What steps or measures can we take to ensure the final answer is consistent across runs? (One possible mitigation is sketched after this list.)
  2. We have observed that across runs the reasoning path used to arrive at the final answer keeps changing. Assuming the initially chosen path is wrong, how can we ensure the LLM's reasoning takes corrective measures and still arrives at the correct answer? (We have seen that these corrective actions are in fact sometimes taken.)
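
One simple mitigation for question 1, sketched below under the assumption that the agent's answers can be compared as plain strings (majority_answer is a hypothetical helper, not part of LangChain): run the agent several times and keep the most common answer, a basic form of self-consistency voting. It does not repair a wrong reasoning path mid-run, but it makes the final answer less sensitive to any single bad path.

from collections import Counter

def majority_answer(agent, question, n_runs=5):
    """Run the agent several times and return the most frequent answer."""
    answers = [agent.run(question) for _ in range(n_runs)]
    # Normalise whitespace and case so trivially different strings still match.
    counts = Counter(a.strip().lower() for a in answers)
    return counts.most_common(1)[0][0]

response = majority_answer(agent, "Which vertical name has the most number of resignations?")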

I am also experiencing the same issue. I opened an issue for it as well: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/discussions/21

@nlpsingh it's probably because of your sampling parameters.

Parameters like temperature, top-p, and min-p control how the next token is sampled, so they can change the output from run to run.

Higher values usually mean more creative (and often better) output, but they can also change the response too much between runs.

Just lower those parameters and it should be fine. With temperature at 0 you will always get the same response, though it might be a bit bland.
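
For example, here is a minimal greedy configuration using the same LlamaCpp wrapper as above (a sketch: the seed value is arbitrary, and model_path is assumed to come from the earlier hf_hub_download call):

from langchain.llms import LlamaCpp

# Greedy decoding: always pick the single most likely token, so repeated
# runs on the same prompt should produce the same output.
deterministic_llm = LlamaCpp(
    model_path=model_path,  # path from hf_hub_download above
    temperature=0,          # disable sampling randomness
    top_p=1,                # no nucleus truncation
    top_k=1,                # consider only the best token
    seed=42,                # fix the RNG seed for anything that still samples
    n_ctx=4096,
    max_tokens=1024,
)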
