---
import Section from "./Section.astro";
---

<Section>
  <p>
    Human conversations are fast, typically <a
      class="underline hover:text-orange-600"
      href="https://www.semanticscholar.org/paper/Turn-taking-in-Human-Communication-%E2%80%93-Origins-and-Levinson/aa8b57bcfac5577b26d2b3a422a2378a7496f257/figure/0"
      >around 200ms</a
    > between turns, and we think LLMs should be just as quick. This site
    provides reliable latency measurements for popular models.
  </p>
  <p class="mt-4">
    You can filter models using the text fields in the header, e.g.,
    <a class="underline hover:text-orange-600" href="?mf=llama-3.1-405b"
      >Llama 3.1 405B providers</a
    >,
    <a
      class="underline hover:text-orange-600"
      href="?mf=gpt-4-turbo|gpt-4o|claude-3|gemini">GPT-4 vs Claude 3 vs Gemini</a
    >.
  </p>
  <p class="mt-4">
    <a class="underline hover:text-orange-600" href="#definitions"
      >Definitions</a
    >, <a class="underline hover:text-orange-600" href="#methodology"
      >methodology</a
    >, and links to <a class="underline hover:text-orange-600" href="#source"
      >source</a
    > below. Stats updated daily.
  </p>
  <p class="mt-4">
    Have another model you want us to benchmark? File an <a
      class="underline hover:text-orange-600"
      href="https://github.com/fixie-ai/fastest.ai/issues"
      target="_blank"
      rel="noopener noreferrer">issue on GitHub</a
    >.
  </p>
</Section>
