---
title: Quickstart
---

## Overview

Regular Postgres tables, known as heap tables, organize data by row. While this layout works well for operational
workloads, it is inefficient for analytical queries, which typically scan many rows but read only a subset of the
columns in a table.

ParadeDB introduces special tables called `parquet` tables. These tables behave like regular Postgres tables but
have two primary advantages:

1. Significantly faster aggregate queries, due to a column-oriented layout and query engine
2. Lower disk usage, due to the compressed Parquet storage format

## How to Use Parquet Tables

```sql
-- USING parquet must be provided
CREATE TABLE movies (name text, rating int) USING parquet;

-- Insert and query data
INSERT INTO movies VALUES ('Star Wars', 9), ('Indiana Jones', 8);
SELECT AVG(rating) FROM movies;

-- Clear the table
TRUNCATE movies;
```

That's it! `parquet` tables accept standard Postgres queries, so there's nothing new to learn.

## When to Use Parquet Tables

Because column-oriented storage formats are not designed for fast updates, `parquet` tables are intended for **append-only data**.
`UPDATE` and `DELETE` statements are not supported. Data that is frequently updated
should be stored in regular Postgres `heap` tables.
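As a sketch of the append-only workflow (the `clicks` table here is illustrative, not part of ParadeDB):

```sql
-- Append-only event data is a good fit for a parquet table
CREATE TABLE clicks (user_id int, page_id int) USING parquet;

-- Appending rows and clearing the table are supported
INSERT INTO clicks VALUES (1, 100), (2, 200);
TRUNCATE clicks;

-- Row-level mutations are not supported and are expected to error:
-- UPDATE clicks SET page_id = 300 WHERE user_id = 1;
-- DELETE FROM clicks WHERE user_id = 2;
```

If individual rows need to change after being written, keep that data in a `heap` table instead.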

## Copying into a Parquet Table

This example demonstrates how to copy data from an existing heap table into a `parquet` table.

```sql
-- Create regular table
CREATE TABLE heap (a int, b int);
INSERT INTO heap VALUES (1, 2);

-- Create parquet table with the same schema
CREATE TABLE events (a int, b int) USING parquet;

-- Copy data into parquet table
INSERT INTO events SELECT * FROM heap;
```

## Known Limitations

`parquet` tables are currently in beta. The following is a list of known limitations. Many of these
will be resolved as `parquet` tables become production-ready.

- [ ] Some Postgres types, notably `json` and `timestamptz`
- [ ] User-defined functions, aggregations, or types
- [ ] JOINing `parquet` and regular Postgres `heap` tables
- [ ] Write-ahead-log (WAL) support/`ROLLBACK`/logical replication
- [ ] Collations
- [ ] Partitioning by specific columns
- [ ] `INSERT ... ON CONFLICT` clauses
- [ ] Index creation
- [ ] External object store integrations (S3/Azure/GCS/HDFS)
- [ ] External Apache Iceberg and Delta Lake support
- [ ] Full text search over `parquet` tables with `pg_bm25`
