---
title: 'AI Toolkit'
description: 'Easily integrate your LLM into AG Grid to control Grid State via natural language by leveraging the new AI Toolkit APIs to generate structured outputs.'
enterprise: true
---

Easily integrate AG Grid with your own LLM, enabling end users to query and manipulate grid state via natural language.

The example below demonstrates a chat application, built using the AI Toolkit APIs and integrated with ChatGPT's `gpt-5-mini` model, that allows for manipulation of grid state via natural language. 

This example does not maintain conversation state. If the LLM responds with a question, update and resubmit your original query rather than only answering the question. 

Suggested prompts: 

- "Show me all the gold medals won by the USA"
- "Sort the competitors with the youngest first"
- "Group by country and show the total number of medals won"

{% gridExampleRunner title="Natural Language Grid State Management" name="natural-language-grid-state" exampleHeight=600 /%}

## How It Works

[Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs) is an LLM feature that ensures model responses adhere to a supplied [JSON Schema](https://json-schema.org/). Structured Outputs are supported by many LLMs, including ChatGPT and Gemini.

The AI Toolkit provides a `getStructuredSchema` API that generates a structured schema based on [grid state](./grid-state/). This schema can be passed to an LLM, which can then generate valid responses that can be passed directly to the `setState` API method. This ensures reliable, schema-aligned instructions for updating or manipulating the grid based on natural language input.

The schema is made up of a series of "features", each representing a different aspect of the grid that can be manipulated. The following features are currently supported:

- [Filtering](./filtering/)
- [Sorting](./row-sorting/)
- [Aggregation](./aggregation/)
- [Pivoting](./pivoting/)
- [Row Grouping](./grouping/)
- [Column Visibility](./column-properties/#reference-display-hide)
- [Column Sizing](./column-sizing/)

### Architecture

At a high level, the AI Toolkit works as follows:

1. **User Input Capture:** End user enters a natural language query.

2. **Prompt Construction (Client/Server):** The application gathers three elements:
   1. The user query
   2. The current grid state, via `gridApi.getState()`
   3. The structured schema of grid state, via `gridApi.getStructuredSchema()`

3. **LLM Service Request:** The complete prompt (including the structured schema) is sent to the LLM endpoint (e.g., OpenAI, Gemini).

4. **Response Processing & Validation:** The LLM returns a JSON object conforming to the schema, which should be validated before use.

5. **State Application:** The validated JSON is passed to `gridApi.setState()`.
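At the code level, the five steps above can be sketched as a single client-side function. This is a minimal sketch: `callLLM` is a placeholder for your own LLM integration, and the response shape matches the schema built later in this guide.

```ts
// Minimal types for the sketch; the real GridApi comes from AG Grid.
interface GridApiLike {
    getState(): object;
    getStructuredSchema(): object;
    setState(state: object, propertiesToIgnore?: string[]): void;
}

// Shape of the LLM response, matching the JSON Schema built in "Creating a Schema".
interface LLMResponse {
    gridState: object;
    propertiesToIgnore: string[];
    explanation: string;
}

// Steps 1-5: capture the query, gather the prompt inputs, call the LLM,
// then apply the returned state to the grid.
async function applyNaturalLanguageQuery(
    userQuery: string,
    gridApi: GridApiLike,
    callLLM: (query: string, state: object, schema: object) => Promise<LLMResponse>
): Promise<string> {
    const gridState = gridApi.getState(); // current grid state
    const schema = gridApi.getStructuredSchema(); // structured schema of grid state
    const response = await callLLM(userQuery, gridState, schema);
    gridApi.setState(response.gridState, response.propertiesToIgnore);
    return response.explanation; // surface to the user
}
```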

{% imageCaption imagePath="resources/ai-toolkit-sequence-diagram.svg" alt="AI Toolkit Sequence Diagram" constrained=true maxWidth="50rem" enableDarkModeFilter=true /%}

{% note %}
If you wish to integrate with an LLM that does not support structured outputs natively, you can still use the schema to validate and parse the LLM's response before passing it to `setState`. There are multiple libraries that can do this for you, such as [ajv](https://ajv.js.org/).
{% /note %}

## Getting Started

To get started you'll need:

- An API key for your chosen LLM
- An existing AG Grid implementation
- An input element for capturing user queries
- Familiarity with making requests to an LLM

### Creating a Schema 

The `getStructuredSchema` API returns a structured [JSON Schema](https://json-schema.org/) representation of the [Grid State](./grid-state/), which can then be used to create an LLM-compatible schema:

```ts
// Generated Structured Schema Representation of Grid State
const gridStateStructuredSchema = gridApi.getStructuredSchema();

// Create LLM compatible JSON Schema, using Grid State Structured Schema
const schema = {
    type: 'object',
    properties: {
        gridState: gridStateStructuredSchema,
        propertiesToIgnore: {
            type: 'array',
            items: {
                type: 'string',
                enum: ['aggregation', 'filter', 'sort', 'pivot', 'columnVisibility', 'columnSizing', 'rowGroup'],
            },
            description: 'List of grid state properties to ignore when applying the new state',
        },
        explanation: {
            type: 'string',
            description: 'Human-readable explanation of the changes made to the grid state',
        },
    },
    required: ['gridState', 'propertiesToIgnore', 'explanation'],
    additionalProperties: false,
};
```

The `getStructuredSchema()` API returns a narrow representation of what can be achieved using the grid's API. For example, if a column is not sortable, the schema will not include that column in the list of sortable columns. This ensures that the LLM is only able to generate valid state changes for the grid.

The `schema` contains several properties for the LLM to populate:

-   `gridState`: a structured output representation of grid state.
-   `propertiesToIgnore`: a list of grid state properties that the LLM has left unchanged, to ensure they are not overridden when updating grid state (optional, but recommended).
-   `explanation`: a string the LLM can use to provide human-readable context to the user about the changes that have been applied (optional, but recommended).
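For illustration, a response conforming to this schema might look like the following (a hypothetical example; the exact contents of `gridState` depend on your grid's columns):

```ts
// Hypothetical LLM response conforming to the schema above
const exampleResponse = {
    gridState: {
        sort: { sortModel: [{ colId: 'age', sort: 'asc' }] },
    },
    propertiesToIgnore: ['filter', 'columnSizing'],
    explanation: 'Sorted the grid by age, youngest first.',
};
```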

### Modifying State with Any LLM

To provide the LLM with sufficient context, we recommend sending the user's request, the current grid state, and the schema to the LLM:

```ts
// Get User's Request, e.g. from Input Element
const userRequest = inputElement.value.trim();

// Get Current Grid State
const gridState = gridApi.getState();

// Send User Request, Grid State & Schema to LLM
const response = await callLLM(userRequest, gridState, schema);
```

The `callLLM` function needs to be implemented in accordance with your chosen LLM. Refer to our example above for a [reference implementation](https://github.com/ag-grid/ag-grid/blob/9399cd046793a702ef0b0924f0203a5cc27b96c6/documentation/ag-grid-docs/src/content/docs/ai-toolkit/_examples/natural-language-grid-state/chatgptApi.ts#L10) using [ChatGPT's Completions API](https://platform.openai.com/docs/api-reference/chat?api-mode=chat) and the [Prompting](#prompting) section for more information on creating system prompts.
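As a sketch of what `callLLM` might send to OpenAI's Chat Completions API, the request body can be assembled as follows. The model name and prompt wording here are assumptions; adapt them for your provider.

```ts
// Build a Chat Completions request body that asks for structured output.
// `schema` is the LLM-compatible JSON Schema built in "Creating a Schema".
function buildChatRequest(userRequest: string, gridState: object, schema: object) {
    return {
        model: 'gpt-5-mini', // assumption: any structured-output-capable model works
        messages: [
            {
                role: 'system',
                content: `You update a data grid's state. Current grid state:\n${JSON.stringify(gridState)}`,
            },
            { role: 'user', content: userRequest },
        ],
        response_format: {
            type: 'json_schema',
            json_schema: { name: 'grid_state_update', strict: true, schema },
        },
    };
}

// The body is then POSTed to the Chat Completions endpoint, e.g.:
// fetch('https://api.openai.com/v1/chat/completions', {
//     method: 'POST',
//     headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}` },
//     body: JSON.stringify(buildChatRequest(userRequest, gridState, schema)),
// });
```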

### Updating Grid State

Once the LLM has provided a response, it should be validated against your top-level schema. This is particularly important when using an LLM that does not support structured outputs. 

In this example, we're using `ajv` to validate the LLM's response before calling `setState` to update the grid:

```ts
import Ajv from 'ajv';

// Init ajv Validator with Schema
const ajv = new Ajv();
const ajvValidator = ajv.compile(schema);

// Validate LLM Response w/ ajv
if (!ajvValidator(response)) {
    console.error('Invalid LLM response', ajvValidator.errors);
    return;
}

// Extract Grid State & Properties to Ignore from LLM Response
const { gridState: newGridState, propertiesToIgnore } = response;

// Update Grid State with LLM Response
gridApi.setState(newGridState, propertiesToIgnore);
```

Passing `propertiesToIgnore` (the properties that are unchanged by the LLM) to `setState` ensures that these properties are not overridden after the grid state is updated. 

## Excluding Features

By default, all features are enabled and included in the schema. If you wish to limit the features returned in the schema, you can do so by providing a list of feature names to the `getStructuredSchema` method.

For example, if you do not want to allow users to manipulate column visibility and sorting, you can call `getStructuredSchema` like this:

```ts
const gridStateStructuredSchema = gridApi.getStructuredSchema({
    exclude: ['columnVisibility', 'sorting'],
});
```

## Providing Additional Context

Occasionally the LLM will need more information than can be provided by the grid alone. As such, you can pass in options for each column defining extra context.

 - `description` - Provides the LLM with an inline description of what the column contains, directly in the schema. This might include details such as the type or format of the data, to aid the LLM when filtering or aggregating.
 - `includeSetValues` - When using the Set Filter, the LLM must be provided with the allowed values so it can correctly set those it wishes to filter on. However, some Set Filters contain many values, which can produce a schema too large for your LLM to process. Set Filter values are therefore not included by default; set this property to `true` to include them. Refer to the [Handling Schema Size Limits](#handling-schema-size-limits) section for more information.

```ts
const gridStateStructuredSchema = gridApi.getStructuredSchema({ columns: {
    sport: {
        description: "The sport the athlete won their medal in",
        includeSetValues: true
    },
    gold: {
        description: "The number of gold medals won by this athlete at this games"
    }
}});
```

{% warning %}
Before including data such as column descriptions or Set Filter values, be aware of your LLM provider's data security policy.
{% /warning %}

## Prompting

The AI Toolkit does not include any prompting logic, as this will vary depending on the LLM you are using and your specific use case. This gives you the flexibility to craft prompts that are tailored to your users and the data in your grid.

In our testing we have found a few things that help get the best results from LLMs when prompting them to generate grid state changes:

- Include the current state of the grid in the prompt. This helps the LLM understand what the grid currently looks like, and what changes are being requested.
- Request the LLM to only return the grid state changes, and nothing else. This helps ensure that the response can be passed directly to `setState()`.
- You may wish to provide the LLM with a few rows of data from the grid, to help it understand the data it is working with. This is especially useful if your grid contains domain-specific data that the LLM may not be familiar with. If you have a small dataset, you can even include the entire dataset in the prompt by using `exportToCsv()`.
- If you have any domain-specific knowledge or terminology that you want the LLM to be aware of, include that in the prompt as well, e.g. "In this dataset, a 'medal' refers to any of the gold, silver, or bronze medals won at the Olympic Games".
- Including a list of available features also helps the LLM understand what it can and cannot do.

Below is an example prompt you can use as a starting point:

```ts
const prompt = `
    You are an expert data analyst working with a data grid. 
    The grid contains data about Olympic athletes and their achievements.

    You should respond to user requests by generating a JSON object that 
    represents their requested changes to the grid state. The response should 
    include all their requested changes, along with any features that are 
    already applied to the grid that they have not requested to change.

    The following is the current state of the grid, represented as a 
    JSON object.

    Current Grid State:
    ${JSON.stringify(gridApi.getState(), null, 2)}

    The grid has the following features available to manipulate:
    - Column Visibility
    - Column Sizing
    - Row Grouping
    - Sorting
    - Aggregation
    - Pivoting
    - Filtering
`
```

## Modifying the Schema

You may modify the structured schema to surgically change options or add extra features. For example, you may wish to add schema for a custom filter, or limit which columns can be hidden by the LLM:

```ts
// Generate base schema
const baseSchema = gridApi.getStructuredSchema();

// Augment with custom constraints
function applyCustomRules(schema) {
    return {
        ...schema,
        // custom schema rules...
    };
}

const customSchema = applyCustomRules(baseSchema);
```

If you choose to do this, make sure that the result is still a valid `GridState` object before passing it to `setState`. Be aware that the JSON Schema supported by LLMs is a subset of the full JSON Schema specification, so build your schemas accordingly.

## Handling Schema Size Limits

`includeSetValues: true` is useful when the LLM must pick from explicit allowed values (e.g., Set Filter), enabling precise filter construction. However, large cardinalities can inflate the prompt and exceed context limits.

**Recommendations**:

- Enable `includeSetValues` selectively on low-cardinality columns only.

- Consider truncating to the top N most frequent values plus an "OTHER" hint.

- Monitor total prompt size; keep within your model’s context window with a buffer for the model’s response.
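The truncation recommendation above can be sketched as a small helper (hypothetical; `countsByValue` would be computed from your own row data):

```ts
// Keep the N most frequent values, plus an 'OTHER' hint for the rest.
function truncateSetValues(countsByValue: Record<string, number>, topN: number): string[] {
    const sorted = Object.entries(countsByValue).sort((a, b) => b[1] - a[1]);
    const top = sorted.slice(0, topN).map(([value]) => value);
    return sorted.length > topN ? [...top, 'OTHER'] : top;
}
```

The resulting list can then be supplied to the LLM in place of the full set of values, with the prompt explaining that "OTHER" stands in for any value not listed.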

## API

{% apiDocumentation source="grid-api/api.json" section="ai-toolkit" names=["getStructuredSchema"] /%}

