```python
from langchain import PromptTemplate, OpenAI, LLMChain

prompt_template = "What is a good name for a company that makes {product}?"

llm = OpenAI(temperature=0)
llm_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(prompt_template)
)
llm_chain("colorful socks")
```

<CodeOutputBlock lang="python">

```
    {'product': 'colorful socks', 'text': '\n\nSocktastic!'}
```

</CodeOutputBlock>
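
Calling the chain with `__call__` (as above) returns a dict containing both the input and the generated `text`. The `run` method, shared by all `Chain` objects, returns just the output string, which is often more convenient. A minimal sketch, reusing the `llm_chain` defined above:

```python
# `run` returns only the generated string rather than a dict of inputs and outputs,
# e.g. '\n\nSocktastic!' for the example above.
llm_chain.run("colorful socks")
```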

## Additional ways of running LLM Chain

Aside from the `__call__` and `run` methods shared by all `Chain` objects, `LLMChain` offers a few more ways of calling the chain logic:

- `apply` allows you to run the chain against a list of inputs:


```python
input_list = [
    {"product": "socks"},
    {"product": "computer"},
    {"product": "shoes"}
]

llm_chain.apply(input_list)
```

<CodeOutputBlock lang="python">

```
    [{'text': '\n\nSocktastic!'},
     {'text': '\n\nTechCore Solutions.'},
     {'text': '\n\nFootwear Factory.'}]
```

</CodeOutputBlock>
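
Since `apply` returns one output dict per input, pulling out just the generated strings is a one-liner. A small usage sketch, assuming the `input_list` above:

```python
# Collect only the generated text from each result dict.
results = llm_chain.apply(input_list)
company_names = [result["text"].strip() for result in results]
# company_names -> ['Socktastic!', 'TechCore Solutions.', 'Footwear Factory.']
```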

- `generate` is similar to `apply`, except it returns an `LLMResult` instead of a string. An `LLMResult` often contains useful generation metadata, such as token usage and finish reason.


```python
llm_chain.generate(input_list)
```

<CodeOutputBlock lang="python">

```
    LLMResult(generations=[[Generation(text='\n\nSocktastic!', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nTechCore Solutions.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nFootwear Factory.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'prompt_tokens': 36, 'total_tokens': 55, 'completion_tokens': 19}, 'model_name': 'text-davinci-003'})
```

</CodeOutputBlock>
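
The fields of the returned `LLMResult` can be inspected directly, for example to read back the generated text and the token usage shown above. A short sketch based on the result printed above:

```python
result = llm_chain.generate(input_list)

# Each input gets its own list of Generation objects.
first_text = result.generations[0][0].text  # '\n\nSocktastic!'
finish_reason = result.generations[0][0].generation_info["finish_reason"]  # 'stop'

# Provider-level metadata such as token usage lives in llm_output.
token_usage = result.llm_output["token_usage"]  # {'prompt_tokens': 36, ...}
```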

- `predict` is similar to the `run` method, except that the input keys are specified as keyword arguments instead of a Python dict.


```python
# Single input example
llm_chain.predict(product="colorful socks")
```

<CodeOutputBlock lang="python">

```
    '\n\nSocktastic!'
```

</CodeOutputBlock>


```python
# Multiple inputs example

template = """Tell me a {adjective} joke about {subject}."""
prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0))

llm_chain.predict(adjective="sad", subject="ducks")
```

<CodeOutputBlock lang="python">

```
    '\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'
```

</CodeOutputBlock>

## Parsing the outputs

By default, `LLMChain` does not parse the output even if the underlying `prompt` object has an output parser. If you would like to apply that output parser to the LLM output, use `predict_and_parse` instead of `predict` and `apply_and_parse` instead of `apply`.

With `predict`:


```python
from langchain.output_parsers import CommaSeparatedListOutputParser

output_parser = CommaSeparatedListOutputParser()
template = """List all the colors in a rainbow"""
prompt = PromptTemplate(template=template, input_variables=[], output_parser=output_parser)
llm_chain = LLMChain(prompt=prompt, llm=llm)

llm_chain.predict()
```

<CodeOutputBlock lang="python">

```
    '\n\nRed, orange, yellow, green, blue, indigo, violet'
```

</CodeOutputBlock>

With `predict_and_parse`:


```python
llm_chain.predict_and_parse()
```

<CodeOutputBlock lang="python">

```
    ['Red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']
```

</CodeOutputBlock>
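
The paragraph above also mentions `apply_and_parse`, which does the same for a batch of inputs. A minimal sketch, assuming the parser-equipped `llm_chain` from this section (its prompt takes no input variables, so each input is an empty dict):

```python
# apply_and_parse runs the chain over each input dict and parses every output,
# so each element should come back as a parsed list rather than a raw string.
llm_chain.apply_and_parse([{}, {}])
```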

## Initialize from string

You can also construct an `LLMChain` from a string template directly; the input variables are inferred from the template.


```python
template = """Tell me a {adjective} joke about {subject}."""
llm_chain = LLMChain.from_string(llm=llm, template=template)
```


```python
llm_chain.predict(adjective="sad", subject="ducks")
```

<CodeOutputBlock lang="python">

```
    '\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'
```

</CodeOutputBlock>
