Getting gibberish responses

#1
by jaycann2 - opened

Hello,

I've just started using Llama 3 models in my LangChain code, and so far I'm getting gibberish responses. I can run Llama 2 models via llama.cpp (python) just fine, but when I swap in Llama 3 I get trash responses. I followed the prompting instructions and I should be on a current llama.cpp (I reinstalled llama-cpp-python to make sure). I'm using GGUF models only, and the sqlcoder-7b-2 model I was using previously worked without issue.

Simply swapping the GGUF file to this one is breaking my code. Any thoughts?
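For what it's worth, the most stripped-down repro I can think of takes LangChain out of the loop and calls llama-cpp-python directly. This is just a sketch, with the GGUF path from my setup and a toy one-table schema standing in for the real one:

from llama_cpp import Llama

# Load the Llama 3 GGUF directly, with no LangChain in between.
llm = Llama(
    model_path="models/llama-3-sqlcoder-8b-Q6_K.gguf",
    n_ctx=2048,
    n_gpu_layers=50,
)

# Llama 3 chat format, filled in by hand with a toy schema.
prompt = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    "Generate a valid SQL query to answer this question: How many artists are there?\n\n"
    "DDL statements: CREATE TABLE artists (ArtistId INTEGER, Name TEXT);"
    "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

out = llm(prompt, max_tokens=128, temperature=0.0)
print(out["choices"][0]["text"])

If this also produces gibberish, the problem is below LangChain (the GGUF or llama.cpp itself); if it looks fine, it's something in the chain.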

EDIT - here's a sample output from running llama-3-sqlcoder-8b-Q6_K.gguf:

[screenshot of the gibberish output]

by bartowski

Hmm, no thoughts off the top of my head, that's weird. I may be able to try the full model to see if I get the same, or if it's a GGUF issue.

Hey @bartowski - here's the code I used to compare the llama2 and llama3 sqlcoder models, where I get gibberish for the llama3 version. I copied the llama3 prompt from the base repo (https://huggingface.co/defog/llama-3-sqlcoder-8b), but there must still be something wrong with what I'm doing. Below is an example that compares the output of sqlcoder-7b-2.Q6_K and llama-3-sqlcoder-8b-Q6_K. It assumes you have the Chinook DB SQLite file in your env and have set the paths to it and to your GGUF files (I'm calling the llama2 version "SQL Coder 2" and the llama3 version "SQL Coder 3"):

from langchain_community.llms import LlamaCpp
from langchain_community.utilities import SQLDatabase
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts.chat import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnablePassthrough


############################################## Init SQL DB ###################################################
SQLITE_DB = 'chinook.db'
db_string = f"sqlite:///{SQLITE_DB}"
db = SQLDatabase.from_uri(db_string, sample_rows_in_table_info=0)

def get_schema(_):
    # Called by RunnablePassthrough.assign with the chain input, which we ignore.
    return db.get_table_info()


def run_query(query):
    return db.run(query)


############################################## Init SQL Chains #################################################
N_GPU_LAYERS = 50    # On Metal, setting this to 1 is enough to enable GPU offload.
N_BATCH = 1028       # Should be between 1 and n_ctx; size it to the RAM on your machine.
SQL_MODEL2_PATH = "models/sqlcoder-7b-2.Q6_K.gguf"
SQL_MODEL3_PATH = "models/llama-3-sqlcoder-8b-Q6_K.gguf"

### SQL Coder 2
sql_coder2_llm = LlamaCpp(
    model_path=SQL_MODEL2_PATH,
    n_gpu_layers=N_GPU_LAYERS,
    n_batch=N_BATCH,
    n_ctx=2048,
    f16_kv=True,  # MUST be True, otherwise you run into problems after a couple of calls
    verbose=False,
    streaming=False,
    model_kwargs={'do_sample': False, 'num_beams': 5}  # transformers-style options; llama.cpp most likely ignores these
)

sql_coder2_prompt = '''### Task
Generate a SQL query to answer [QUESTION]{question}[/QUESTION]

### Instructions
- If you cannot answer the question with the available database schema, return 'I do not know'

### Database Schema
The query will run on a database with the following schema:
{schema}

### Answer
Given the database schema, here is the SQL query that answers [QUESTION]{question}[/QUESTION]
[SQL]
'''

### SQL Coder 3
sql_coder3_llm = LlamaCpp(
    model_path=SQL_MODEL3_PATH,
    n_gpu_layers=N_GPU_LAYERS,
    n_batch=N_BATCH,
    n_ctx=2048,
    f16_kv=True,  # MUST be True, otherwise you run into problems after a couple of calls
    verbose=False,
    streaming=False,
    model_kwargs={'do_sample': False, 'num_beams': 5}  # transformers-style options; llama.cpp most likely ignores these
)

sql_coder3_prompt = """<|begin_of_text|><|start_header_id|>user<|end_header_id|>

Generate a valid SQL query to answer this question: {question}

DDL statements: {schema}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

The following SQL query best answers the question {question}
```sql
"""

def get_sql_coder_chain(llm, prompt):
    class InputType(BaseModel):
        question: str
    
    sql_chain = (
        RunnablePassthrough.assign(schema=get_schema).with_types(input_type=InputType)
        | ChatPromptTemplate.from_messages([("human", prompt)])
        | llm.bind(stop=["\nSQLResult:", ";", "\nAnswer", "\nHuman", '\nResults'])
        | StrOutputParser()
    )

    return sql_chain
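# Another sketch, assuming LangChain's LlamaCpp keeps the underlying llama_cpp.Llama
# object on its `client` attribute: check whether the Llama 3 header markup is being
# tokenized as special tokens or split into plain text, since plain-text special
# tokens would plausibly explain gibberish output.
_header = "<|start_header_id|>user<|end_header_id|>"
_as_special = sql_coder3_llm.client.tokenize(_header.encode("utf-8"), add_bos=False, special=True)
_as_plain = sql_coder3_llm.client.tokenize(_header.encode("utf-8"), add_bos=False, special=False)
print(len(_as_special), len(_as_plain))  # far fewer tokens with special=True means they parse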


####################################### Test Chains ####################################################
sql_coder2_chain = get_sql_coder_chain(sql_coder2_llm, sql_coder2_prompt)
sql_coder3_chain = get_sql_coder_chain(sql_coder3_llm, sql_coder3_prompt)

question = "How many artists are there?"
print("SQL Coder 2 response:\n------------------------------------")
print(sql_coder2_chain.invoke({'question': question}))
print("\n\n")
print("SQL Coder 3 response:\n------------------------------------")
print(sql_coder3_chain.invoke({'question': question}))

My Output:

[screenshot: SQL Coder 2 returns a valid query, SQL Coder 3 returns gibberish]

Do you see anything wrong with the prompt, or anything else? Does this give you good responses on your end?

Thanks again!

Jack
