---
sidebar_position: 3
---

# Chat Models

Chat Models are a core component of LangChain.

A chat model is a language model that takes chat messages as inputs and returns chat messages as outputs (as opposed to operating on plain text).

LangChain has integrations with many model providers (OpenAI, Cohere, Hugging Face, etc.) and exposes a standard interface to interact with all of these models.

LangChain lets you call models in sync, async, batching, and streaming modes, and provides additional features such as caching.

## [Quick Start](./quick_start)

Check out [this quick start](./quick_start) for an overview of working with chat models, including the different methods they expose.

## [Integrations](/docs/integrations/chat/)

For a full list of chat model integrations that LangChain provides, please see the [Integrations page](/docs/integrations/chat/).

## How-To Guides

We have several how-to guides for more advanced usage of chat models.
This includes:

- [How to cache ChatModel responses](./chat_model_caching)
- [How to use ChatModels that support function calling](./function_calling)
- [How to stream responses from a ChatModel](./streaming)
- [How to track token usage in a ChatModel call](./token_usage_tracking)
- [How to create a custom ChatModel](./custom_chat_model)
