Keep getting "Could not parse LLM output" and agents do not seem to be communicating properly

#54
by jwr1015 - opened

I keep getting "Could not parse LLM output" when I build an agent and run a query, and the agents do not seem to be interacting properly.

import torch

from transformers import pipeline
from langchain import PromptTemplate, LLMChain
from langchain.agents import Tool, initialize_agent
from langchain.chains import LLMMathChain
from langchain.chains.summarize import load_summarize_chain
from langchain.llms import HuggingFacePipeline

# Load Dolly v2 12B as a Transformers text-generation pipeline and wrap it for LangChain
generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16,
                         trust_remote_code=True, device_map="auto", return_full_text=True)

hf_pipeline = HuggingFacePipeline(pipeline=generate_text)

tools = []
llm = hf_pipeline

prompt = PromptTemplate(
    input_variables=["instruction"],
    template="{instruction}"
)

llm_chain = LLMChain(llm=llm, prompt=prompt)

general_tool = Tool(
    name='Language Model',
    func=llm_chain.run,
    description='use this tool for general purpose queries and logic'
)

## QA closed book
prompt = PromptTemplate(template="Question: {question}\nAnswer:", input_variables=["question"])
qachain = LLMChain(llm=llm, prompt=prompt)
QA_general = Tool(
    name='Question Answer',
    func=qachain.run,
    description='use this tool when answering questions'
)

### Math 
llm_math = LLMMathChain(llm=llm, verbose=True)
llm_math_tool = Tool(
    name='Calculator',
    func=llm_math.run,
    description='use this tool to perform calculations'
)

### summarize
summarize_chain = load_summarize_chain(llm, chain_type="map_reduce")
summarize_general = Tool(
    name='Summarization Tool',
    func=summarize_chain.run,
    description='use this tool when summarizing information'
)


# when giving tools to LLM, we must pass as list of tools
tools.extend([llm_math_tool, general_tool, summarize_general, QA_general])

agent = initialize_agent(
    agent="zero-shot-react-description",
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=20
)

  agent(r"""
What is 2 * 2 equal too?
  """)
srowen (Databricks org)

It just means the LLM's response isn't following instructions closely enough for the chain to find what it's looking for (the Thought/Action/Observation format the zero-shot ReAct agent expects). It's possible Dolly doesn't do well here, or needs different prompting.
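If your LangChain version supports it, one thing worth trying (untested with Dolly, so treat this as a sketch rather than a fix) is letting the agent recover from malformed output instead of raising:

# Assumes a LangChain version whose AgentExecutor accepts handle_parsing_errors;
# when the model's reply doesn't match the expected ReAct format, the agent
# feeds the error back to the model and retries rather than failing with
# "Could not parse LLM output".
agent = initialize_agent(
    agent="zero-shot-react-description",
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=20,
    handle_parsing_errors=True,
)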

By chance, do you have any examples of LangChain agents with Dolly 2.0?

srowen (Databricks org)

I tried with the SQLChain but didn't get good results (not with OpenAI either, really). Here's an example with a QA chain that works well though: https://www.dbdemos.ai/demo.html?demoName=llm-dolly-chatbot
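The pattern in that demo is roughly a retrieval QA chain over an embedded document store. A minimal sketch (not the demo's exact code, and assuming you already have a list of LangChain documents in `docs`) looks like:

from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA

# Embed the documents and store them in a local Chroma index.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = Chroma.from_documents(docs, embeddings)

# Let Dolly answer questions grounded in the retrieved passages
# instead of driving a ReAct-style agent loop.
qa = RetrievalQA.from_chain_type(
    llm=hf_pipeline,
    chain_type="stuff",
    retriever=db.as_retriever(),
)
qa.run("your question here")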

@srowen , just wanted to take a moment and thank you for your time. Keep up the important work :)

srowen changed discussion status to closed
