---
title: "Hono on Vercel: A Performance Deep Dive into Fluid Compute"
description: ""
author: "Thibault Le Ouay Ducasse"
publishedAt: "2025-08-27"
image: "/assets/posts/hono-vercel-fluid-compute/hono.png"
category: "engineering"
---



This article details how to deploy a new Hono server on Vercel and monitor it using OpenStatus, with a focus on observing the impact of Vercel's Fluid Compute.
We'll compare the performance of a "warm" server, which is regularly pinged, against a "cold" server that remains idle.


## Our Setup


First, we set up our Hono server using Vercel's [zero-configuration deployment](https://hono.dev/docs/getting-started/vercel):

1. We created a new Hono project: `pnpm create hono@latest`
2. We navigated into the new directory: `cd new-directory`
3. We followed Vercel's zero-configuration deployment instructions for Hono backends.
4. We deployed the application using `vc deploy`.

We repeated this process to create two identical servers. One server is designated as "warm," receiving a request every minute to prevent it from going idle. The other is "cold," and we only send a request to it once per hour to observe the impact of cold starts. Both servers were hosted in the `IAD1` region.


Next, we configured monitoring with OpenStatus by creating a new monitor from a YAML configuration.

This is the configuration for the "cold" server:

```yaml
# yaml-language-server: $schema=https://www.openstatus.dev/schema.json

"hono-cold":
  active: true
  assertions:
  - compare: eq
    kind: statusCode
    target: 200
  description: Monitoring Hono App on Vercel
  frequency: 1h
  kind: http
  name: Hono Vercel Cold
  public: true
  regions:
  - arn
  - ams
  - atl
  - bog
  - bom
  - bos
  - cdg
  - den
  - dfw
  - ewr
  - eze
  - fra
  - gdl
  - gig
  - gru
  - hkg
  - iad
  - jnb
  - lax
  - lhr
  - mad
  - mia
  - nrt
  - ord
  - otp
  - phx
  - scl
  - sea
  - sin
  - sjc
  - syd
  - yul
  - yyz
  request:
    headers:
      User-Agent: OpenStatus
    method: GET
    url: https://hono-cold.vercel.app/
  retry: 3
```
We deployed the monitors using the [OpenStatus CLI](https://docs.openstatus.dev/tutorial/get-started-with-openstatus-cli/):

```bash
openstatus monitors apply
```
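The "warm" monitor uses the same configuration; only the name, URL, and check frequency differ. A sketch is shown below (the `hono-warm.vercel.app` URL is an assumption mirroring the cold one, and the 33-region list is elided for brevity):

```yaml
"hono-warm":
  active: true
  assertions:
  - compare: eq
    kind: statusCode
    target: 200
  description: Monitoring Hono App on Vercel
  frequency: 1m
  kind: http
  name: Hono Vercel Warm
  public: true
  # regions: same 33-region list as the cold monitor above
  request:
    headers:
      User-Agent: OpenStatus
    method: GET
    url: https://hono-warm.vercel.app/
  retry: 3
```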


### Our metrics

These are our metrics for both cold and warm deployments from the last 24 hours.

#### Warm

<Grid cols={5}>
<div>

**100**% UPTIME

</div>
<div>

**0**# FAILING

</div>
<div>

**47,520**# PINGS

</div>
<div className="hidden md:block"></div>
<div className="hidden md:block"></div>
<div>

**171**ms P50

</div>
<div>

**275**ms P75

</div>
<div>

**343**ms P90

</div>
<div>

**417**ms P95

</div>
<div>

**524**ms P99

</div>
</Grid>

<div className="mt-4">
  <SimpleChart
    staticFile="/assets/posts/hono-vercel-fluid-compute/hono-warm.json"
    caption="hono warm p50 latency between 18. Augh and 25. Aug 2025 aggregated in a 1h window."
  />
</div>

#### Cold

<Grid cols={5}>
<div>

**100**% UPTIME

</div>
<div>

**0**# FAILING

</div>
<div>

**792**# PINGS

</div>
<div className="hidden md:block"></div>
<div className="hidden md:block"></div>
<div>

**212**ms P50

</div>
<div>

**333**ms P75

</div>
<div>

**439**ms P90

</div>
<div>

**529**ms P95

</div>
<div>

**639**ms P99

</div>
</Grid>


<div className="mt-4">
  <SimpleChart
    staticFile="/assets/posts/hono-vercel-fluid-compute/hono-cold.json"
    caption="Vercel edge p50 latency between 18. Augh and 25. Aug 2025 aggregated in a 1h window."
  />
</div>


### Analysis and Discussion

Comparing the results, the warm server is significantly faster, as expected. Its p99 latency is 524ms, while the cold server's p99 latency is 639ms. This 115ms gap (roughly a 22% increase) highlights the overhead of a cold start. Compared with a similar test we ran on Vercel's previous Node.js runtime, however, both servers perform notably better.

> Read our blog post: [Monitoring latency: Vercel Serverless Function vs Vercel Edge Function](/blog/monitoring-latency-vercel-edge-vs-serverless)

### The Good

- **Excellent Developer Experience (DX):** Deploying a Hono server on Vercel is incredibly simple, requiring just a couple of commands. The zero-configuration setup is a major plus for developers.
- **Performance Improvements:** Fluid Compute provides a tangible improvement over the previous Vercel Node.js runtime. It reduces the impact of cold starts and makes the serverless experience more efficient.


### The Bad

- **Deprecation of Edge Functions:** Vercel has deprecated its dedicated Edge Functions in favor of a unified Vercel Functions infrastructure that uses Fluid Compute. While this unifies the platform, it might force a transition for existing projects.
- **Cost Considerations:** While Fluid Compute aims for efficiency, our "warm" server is active for roughly 8 minutes and 20 seconds per day, which translates to roughly 250 minutes of usage per month. Depending on the specific CPU time and memory usage, this could exceed the free tier's limits and require a paid plan.
- **Complexity:** The new pricing model, which combines active CPU time, provisioned memory, and invocations, can be more complex to track and predict than the simpler invocation-based pricing of the past.
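To show where the daily usage figure for the warm server comes from, here is a back-of-the-envelope sketch. The ~10.5ms of active CPU per request is an assumption chosen to roughly reproduce the ~8 min 20 s/day figure above, not a measured value.

```typescript
// 33 probe regions, each pinging the warm server once per minute.
const regions = 33
const requestsPerDay = regions * 60 * 24 // 47,520 pings/day

// Assumed average active CPU time per request (hypothetical).
const avgActiveCpuMs = 10.5

const activeSecondsPerDay = (requestsPerDay * avgActiveCpuMs) / 1000

console.log(requestsPerDay, Math.round(activeSecondsPerDay)) // 47520 499
```

At ~499 active seconds (about 8 min 20 s) per day, the real bill then depends on how Vercel weighs active CPU time against provisioned memory and invocations.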



### Conclusion

Deploying a Hono server on Vercel offers an excellent developer experience, and Fluid Compute delivers a real improvement over the previous Node.js runtime. However, the deprecation of Edge Functions and the complexity of the new pricing model are potential drawbacks.



_Create an account on [OpenStatus](/app/sign-up) to
monitor your API and get notified when your latency increases._
