langchain error
context = """George Washington (February 22, 1732[b] β December 14, 1799) was an American military officer, statesman,
and Founding Father who served as the first president of the United States from 1789 to 1797."""
print(llm_context_chain.predict(instruction="When was George Washington president?", context=context).lstrip())
This gives me the following error:
150 response = self.pipeline(prompt)
151 if self.pipeline.task == "text-generation":
152 # Text generation return includes the starter text.
--> 153 text = response[0]["generated_text"][len(prompt) :]
154 elif self.pipeline.task == "text2text-generation":
155 text = response[0]["generated_text"]
TypeError: string indices must be integers
Looks like you didn't set return_full_text=True as in the example. Without that flag the Dolly pipeline most likely returns a bare string instead of a list of dicts, which is why the response[0]["generated_text"] lookup in your traceback fails:
https://github.com/databrickslabs/dolly/blob/master/examples/langchain.py#L60
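For reference, the relevant part of that example constructs the pipeline roughly like this (paraphrased; see the link above for the exact code):
import torch
from transformers import pipeline

generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16,
                         trust_remote_code=True, device_map="auto", return_full_text=True)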
Is there an example of how this is used with RetrievalQA? I used the exact same setup as in the model card, wrapped pipe with HuggingFacePipeline, and passed it to RetrievalQA.from_chain_type as the llm argument. However, I'm getting the error: "The following model_kwargs are not used by the model: ['return_full_text']"
Show how you are loading the pipeline
Nothing really fancy, pretty standard, as in the examples from the model card:
pipe = pipeline(model=model_path, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", return_full_text=True)
llm = HuggingFacePipeline(pipeline=pipe)
and then using the llm as below:
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,
)
The above error occurred when using qa to do document QA as qa({'query': "some random question"}).
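For reference, the full call I'm making; the field names assume return_source_documents=True returns the sources alongside the answer, as documented:
result = qa({"query": "some random question"})
# on success, result["result"] should hold the answer and result["source_documents"] the retrieved docs
print(result["result"])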
Hm, this works fine for me; here's an example. Are you loading from some local copy?
import torch
from transformers import pipeline
from langchain.prompts import PromptTemplate
from langchain.llms import HuggingFacePipeline
from langchain.chains.question_answering import load_qa_chain
instruct_pipeline = pipeline(model="databricks/dolly-v2-7b", torch_dtype=torch.bfloat16, trust_remote_code=True,
                             device_map="auto", return_full_text=True, do_sample=False, max_new_tokens=128)
prompt_with_context = PromptTemplate(input_variables=["question", "context"], template="{context}\n\n{question}")
hf_pipe = HuggingFacePipeline(pipeline=instruct_pipeline)
load_qa_chain(llm=hf_pipe, chain_type="stuff", prompt=prompt_with_context)
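The chain it returns can then be invoked with the retrieved documents and the question. A minimal sketch, assuming retriever is the same retriever object from earlier in the thread:
chain = load_qa_chain(llm=hf_pipe, chain_type="stuff", prompt=prompt_with_context)
docs = retriever.get_relevant_documents("When was George Washington president?")
result = chain({"input_documents": docs, "question": "When was George Washington president?"})
print(result["output_text"])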
Yes, I used a local copy. I tried the same code but still got the same error: "The following model_kwargs are not used by the model: ['return_full_text']".
I think you have an old copy without the instruct pipeline or something, if you get that error. Use my example, where you let it download from HF.
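If you need to stay on a local copy, one way to refresh the snapshot (a sketch, assuming huggingface_hub is installed) is:
from huggingface_hub import snapshot_download

# re-downloads the repo, including the current instruct_pipeline.py, and returns the local path
model_path = snapshot_download("databricks/dolly-v2-7b")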
I get the same error: " The following model_kwargs are not used by the model: ['return_full_text']".
I am on the latest Hugging Face and langchain packages.
Edit: I am also using a local copy
I had the same problem; for me, pinning an older langchain version fixed it:
!pip install langchain==0.0.220
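To confirm the pin took effect before retrying:
import langchain
print(langchain.__version__)  # expected: 0.0.220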