ERROR: response: <Response [404]>

#4
by Shamito - opened

import langchain
from langchain import PromptTemplate, LLMChain
from langchain.llms import TextGen

langchain.debug = True

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])
llm = TextGen(model_url='https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GGML/blob/main/wizardlm-13b-v1.1-superhot-8k.ggmlv3.q4_0.bin')
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)

output:
(myenv) PS C:\Users\abdul\text-generation> & c:/Users/abdul/text-generation/myenv/Scripts/python.exe c:/Users/abdul/text-generation/lan.py
[chain/start] [1:RunTypeEnum.chain:LLMChain] Entering Chain run with input:
{
  "question": "What NFL team won the Super Bowl in the year Justin Bieber was born?"
}
[llm/start] [1:RunTypeEnum.chain:LLMChain > 2:RunTypeEnum.llm:TextGen] Entering LLM run with input:
{
  "prompts": [
    "Question: What NFL team won the Super Bowl in the year Justin Bieber was born?\n\nAnswer: Let's think step by step."
  ]
}
ERROR: response: <Response [404]>
[llm/end] [1:RunTypeEnum.chain:LLMChain > 2:RunTypeEnum.llm:TextGen] [345.12100000000004ms] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "",
        "generation_info": null
      }
    ]
  ],
  "llm_output": null,
  "run": null
}
[chain/end] [1:RunTypeEnum.chain:LLMChain] [352.551ms] Exiting Chain run with output:
{
  "text": ""
}
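
The 404 comes from the request TextGen actually sends. The wrapper doesn't download or load model files; it POSTs the prompt to an HTTP endpoint built from model_url (in the LangChain versions of this era, {model_url}/api/v1/generate, the text-generation-webui blocking-API path; treat the exact path as an assumption and check your version). Tacking that path onto a huggingface.co file link hits a page that doesn't exist, which is the <Response [404]> in the log. A quick check with requests:

import requests

# The Hub file URL from the script above. Appending the webui API path
# to it produces a URL that no server route answers, hence the 404.
model_url = (
    "https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GGML"
    "/blob/main/wizardlm-13b-v1.1-superhot-8k.ggmlv3.q4_0.bin"
)
resp = requests.post(f"{model_url}/api/v1/generate", json={"prompt": "test"})
print(resp)  # <Response [404]>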

I don't know; please raise it as an issue with the developers of the Llama.cpp module for LangChain. I've never used it myself.
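
For anyone hitting the same error: the model_url is the problem, not the model. TextGen is a thin client for text-generation-webui, so model_url must be the base URL of a webui server that is already running with its API enabled (python server.py --api) and the GGML model loaded, not a link to the .bin file on the Hugging Face Hub. A minimal sketch, assuming the webui's default local API address (http://localhost:5000; adjust host and port to your setup):

from langchain import PromptTemplate, LLMChain
from langchain.llms import TextGen

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

# Point at the running text-generation-webui server, not at the model
# file on the Hub. The model is downloaded and loaded inside the webui;
# LangChain only talks to its HTTP API.
llm = TextGen(model_url="http://localhost:5000")

llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run("What NFL team won the Super Bowl in the year Justin Bieber was born?"))

Download wizardlm-13b-v1.1-superhot-8k.ggmlv3.q4_0.bin through the webui's model downloader (or manually into its models folder), load it there, and only then run the script.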
