---
title: "Memory threads and resources | Memory"
description: "Learn how Mastra's memory system works with working memory, conversation history, and semantic recall."
---

# Memory threads and resources

Mastra organizes memory into threads: records that group related interactions. Each thread is identified by two IDs:

1. **`thread`**: A globally unique ID representing the conversation (e.g., `support_123`). Must be unique across all resources.
2. **`resource`**: The user or entity that owns the thread (e.g., `user_123`, `org_456`).

The `resource` is especially important for [resource-scoped memory](./working-memory#resource-scoped-memory-default), which allows memory to persist across all threads associated with the same user or entity.

```typescript {4} showLineNumbers
const stream = await agent.stream("message for agent", {
  memory: {
    thread: "conversation-123",
    resource: "user-123",
  },
});
```

:::warning

Even with memory configured, agents won’t store or recall information unless both `thread` and `resource` are provided.

:::

> Studio sets `thread` and `resource` IDs automatically. In your own application, you must provide them manually as part of each `.generate()` or `.stream()` call.
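To make the two identifiers concrete, a small helper like the following (hypothetical, not part of Mastra) can derive both IDs from your application's own user and conversation records:

```typescript
// Hypothetical helper: derive the two memory identifiers from app data.
// The function and ID naming scheme are illustrative; adapt them to your own IDs.
function memoryIds(userId: string, conversationId: string) {
  return {
    // Globally unique per conversation, across all resources.
    thread: `support_${conversationId}`,
    // Owner of the thread; shared by all of this user's threads.
    resource: `user_${userId}`,
  };
}

// Two conversations for the same user share a resource but not a thread,
// so resource-scoped working memory persists across both.
const first = memoryIds("123", "billing-001");
const second = memoryIds("123", "billing-002");
// first.resource === second.resource, but first.thread !== second.thread
```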

### Thread title generation

Mastra can automatically generate descriptive thread titles from the user's first message, making it easier to organize and display conversations in your UI. This feature is disabled by default; enable it by setting `generateTitle` to `true`.

```typescript {3-7} showLineNumbers
export const testAgent = new Agent({
  memory: new Memory({
    options: {
      threads: {
        generateTitle: true, // Explicitly enable automatic title generation
      },
    },
  }),
});
```

> Title generation runs asynchronously after the agent responds and does not affect response time. See the [full configuration reference](/reference/v1/memory/memory-class) for details and examples.

#### Optimizing title generation

Titles are generated using your agent's model by default. To optimize cost or behavior, provide a smaller `model` and custom `instructions`. This keeps title generation separate from main conversation logic.

```typescript {5-9} showLineNumbers
export const testAgent = new Agent({
  // ...
  memory: new Memory({
    options: {
      threads: {
        generateTitle: {
          model: "openai/gpt-4.1-nano",
          instructions:
            "Generate a concise title based on the user's first message",
        },
      },
    },
  }),
});
```

#### Dynamic model selection and instructions

You can configure thread title generation dynamically by passing functions to `model` and `instructions`. These functions receive the `requestContext` object, allowing you to adapt title generation based on user-specific values.

```typescript {7-16} showLineNumbers
export const testAgent = new Agent({
  // ...
  memory: new Memory({
    options: {
      threads: {
        generateTitle: {
          model: ({ requestContext }) => {
            const userTier = requestContext.get("userTier");
            return userTier === "premium"
              ? "openai/gpt-5.1"
              : "openai/gpt-4.1-nano";
          },
          instructions: ({ requestContext }) => {
            const language = requestContext.get("userLanguage") || "English";
            return `Generate a concise, engaging title in ${language} based on the user's first message.`;
          },
        },
      },
    },
  }),
});
```
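Because the selection logic is plain TypeScript, it can be factored out and unit-tested independently of Mastra. A minimal sketch, assuming `requestContext` behaves like a `Map`-style store with a `get` method (the `pickTitleModel` helper name is ours, not part of Mastra's API):

```typescript
// Hypothetical extraction of the tier-based model choice shown above.
type RequestContextLike = { get(key: string): string | undefined };

function pickTitleModel(requestContext: RequestContextLike): string {
  const userTier = requestContext.get("userTier");
  return userTier === "premium"
    ? "openai/gpt-5.1"
    : "openai/gpt-4.1-nano";
}

const premiumCtx = new Map([["userTier", "premium"]]);
const freeCtx = new Map<string, string>();
pickTitleModel(premiumCtx); // "openai/gpt-5.1"
pickTitleModel(freeCtx);    // "openai/gpt-4.1-nano"
```

Keeping this logic in a named function also makes it easy to swap models or add tiers without touching the `Memory` configuration itself.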
