Could not parse LLM output.

#23
by Harsh6519 - opened

I am doing Q&A over CSV data using the LangChain CSV agent.

Here is my code:

import os
import pandas as pd
from langchain_experimental.agents.agent_toolkits import create_csv_agent
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
from langchain.llms import HuggingFacePipeline

os.environ["OPENAI_API_KEY"] = "sk-..."  # key redacted; never post a live API key

df = pd.read_csv("dataset.csv")

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-t5-xl",
    max_length=512,
)
pipe = pipeline(
    "text2text-generation",
    model=model,
    tokenizer=tokenizer,
    max_length=512,
    repetition_penalty=1.15,
)

local_llm = HuggingFacePipeline(pipeline=pipe)
agent = create_csv_agent(
    llm=local_llm, path="dataset.csv", verbose=True, handle_parsing_errors=True
)

try:
    result = agent.run("How many people have the same height? Return the answer in text format.")
    print("Result->", result)  # agent.run returns a string, not a dict, so .keys() would fail
except Exception as e:
    print("e===", e)
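For context on why this error appears: the CSV agent expects the LLM to emit ReAct-style text (Thought / Action / Final Answer lines) that it can parse, and flan-t5 rarely produces that format, so parsing fails. The following is a toy sketch of that kind of parsing (parse_react_output is a hypothetical helper for illustration, not LangChain's actual parser):

```python
import re

def parse_react_output(text: str) -> str:
    """Toy ReAct-style parser: require a 'Final Answer:' marker,
    otherwise fail the same way the agent reports."""
    match = re.search(r"Final Answer:\s*(.*)", text, re.DOTALL)
    if match:
        return match.group(1).strip()
    raise ValueError(f"Could not parse LLM output: `{text}`")

# A model that follows the expected format parses cleanly:
print(parse_react_output("Thought: count duplicate heights\nFinal Answer: 42"))

# A plain free-text answer (typical of flan-t5 here) lacks the marker and fails:
try:
    parse_react_output("There are 42 people with the same height.")
except ValueError as e:
    print("parse failed:", e)
```

So the error usually means the model's raw output did not match the agent's expected format, which is why switching to a model that follows ReAct-style instructions (or keeping handle_parsing_errors=True as a fallback) tends to matter more than the query wording.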

With this I am getting the error: Could not parse LLM output.
How can I solve this issue? If anyone has any idea, please help me out.
