---
sidebar_position: 1
slug: /
title: Introduction
hide_title: true
---

import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";

Logflare is a log ingestion and querying engine that stores log events in a columnar database (BigQuery) and lets you query them.

<ThemedImage
  alt="Logflare Dashboard"
  sources={{
    light: "/img/dashboard.png",
    dark: "/img/dashboard-dark.png",
  }}
  style={{ maxHeight: 800, maxWidth: "100%" }}
/>

Logflare has been [acquired by Supabase](https://supabase.com/blog/supabase-acquires-logflare). The service continues to operate and powers the [Supabase Platform's](https://supabase.com/) logging and observability features.

## Features and Motivations

### Scalable Storage and Querying Costs

Columnar databases allow for fast analysis while storing data compactly, letting you keep orders of magnitude more event data at the same cost that our competitors offer. Furthermore, costs scale predictably with the amount of data stored, giving you peace of mind when managing billing and infrastructure costs.

Lucene-based log management systems worked well before the advent of scalable database options, but they become prohibitively expensive beyond a certain scale and volume, and the data must be shipped elsewhere to be analyzed over the long term.

BigQuery in particular was built in-house by Google to store and analyze petabytes of event data, and Logflare leverages it to let you store orders of magnitude more data.

### Bring Your Own Backend

Logflare can integrate with your own backends, in which case Logflare manages only the pipeline and throughput of log events. This gives you maximum flexibility for storing sensitive data and complete control over your storage and querying costs.

### Schema Management

When events are ingested, Logflare automatically manages the backend schema, so you can insert JSON payloads without worrying about data type changes.

When new fields are sent to Logflare, the data type is detected automatically and merged into the current table schema.
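For example (the payloads and field names here are illustrative), suppose a source has only ever received events shaped like:

```json
{ "message": "user signed in", "metadata": { "user_id": 123 } }
```

If a later event arrives with an additional field:

```json
{ "message": "user signed in", "metadata": { "user_id": 123, "plan": "free" } }
```

Logflare detects the type of the new `metadata.plan` field from its value and merges it into the existing table schema; no manual migration is required.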

### Querying UI and API

Logflare provides a fully featured [querying interface](./concepts/ingestion#querying). [Logflare Endpoints](/concepts/endpoints.md) provide a programmatic interface for executing SQL queries on your stored log events, letting you analyze and leverage your event data in downstream applications.

---

## Getting Started

1. **Create an Account**

   Head over to the [Logflare site](https://logflare.app/) and [create an account](https://logflare.app/auth/login).

2. **Create a Source**

   Once you're logged in, you can create a **New Source**.<br />
   Retrieve your source ID and API Key by clicking on the setup button.

   See [Ingestion and Sources](/concepts/ingestion#creating-a-source) for detailed instructions on creating and configuring sources.

3. **Send a Log**

   Once your source is created, execute this cURL command to send a log event to Logflare.

   Replace the `YOUR-SOURCE-ID-HERE` and `YOUR-API-KEY-HERE` placeholders with the values from step 2.

   ```bash
   curl -X "POST" "https://api.logflare.app/logs/json?source=YOUR-SOURCE-ID-HERE" \
     -H 'Content-Type: application/json; charset=utf-8' \
     -H 'X-API-KEY: YOUR-API-KEY-HERE' \
     -d $'[{
       "message": "This is the main event message",
       "metadata": {"some": "log event"}
     }]'
   ```

4. **Check the Source**

   Congratulations, your first log event has been successfully POSTed to Logflare! You can now search and filter the source for specific events using the [Logflare Query Language](/concepts/lql).
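
The cURL call in step 3 can also be made from application code. A minimal Python sketch using only the standard library is shown below; the `build_log_request` helper is hypothetical, but the URL, headers, and body mirror the documented cURL example:

```python
import json
import urllib.request

# Documented ingestion endpoint from the cURL example above.
LOGFLARE_URL = "https://api.logflare.app/logs/json"

def build_log_request(source_id, api_key, events):
    """Build a POST request that sends a batch of log events to Logflare."""
    url = f"{LOGFLARE_URL}?source={source_id}"
    body = json.dumps(events).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json; charset=utf-8",
            "X-API-KEY": api_key,
        },
        method="POST",
    )

req = build_log_request(
    "YOUR-SOURCE-ID-HERE",
    "YOUR-API-KEY-HERE",
    [{"message": "This is the main event message", "metadata": {"some": "log event"}}],
)
# urllib.request.urlopen(req)  # uncomment to actually send the event
```

Substitute your real source ID and API key before uncommenting the final line to send the request.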
