fix: typo in description
index.html +3 -3

@@ -10,11 +10,11 @@
 <main class="max-w-6xl mx-auto p-6">
 <h1 class="text-3xl font-bold mb-6 text-center">Browser LLM Evaluation</h1>
 <p class="mb-6 text-gray-700 text-center">
-This project explores how in-browser LLM inference
+This project explores how in-browser LLM inference behaves compared to cloud-based inference in terms of
 latency.
-The goal is to
+The goal is to model different request incoming patterns and routing strategies between cloud and on-device
 models.
-For the prompts and evaluation of the accuracy the BooIQ dataset is used.
+For the prompts and evaluation of the accuracy the <a href="https://huggingface.co/datasets/google/boolq" target="_blank">BooIQ</a> dataset is used.
 The project is currently under development and does not aim to provide accurate LLM responses, but rather to
 measure performance differences.
 To run the cloud based inference, you need to bring your own OpenRouter API key.
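The description's last line says cloud inference requires a user-supplied OpenRouter API key. A minimal sketch of what such a request could look like from the browser, against OpenRouter's chat-completions endpoint; the model name, helper name, and yes/no prompt framing are illustrative assumptions, not taken from this project's code:

```javascript
// Sketch: build a cloud-inference request for OpenRouter's
// chat-completions endpoint. The API key comes from the user
// ("bring your own key"); the model name is a placeholder.
function buildOpenRouterRequest(apiKey, question) {
  return {
    url: "https://openrouter.ai/api/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "meta-llama/llama-3.1-8b-instruct", // placeholder model id
        messages: [
          // BoolQ-style yes/no question as a single user turn
          { role: "user", content: `Answer yes or no: ${question}` },
        ],
      }),
    },
  };
}

// Usage in the browser:
//   const { url, options } = buildOpenRouterRequest(userKey, question);
//   const res = await fetch(url, options);
```

Separating request construction from the `fetch` call keeps the latency measurement points (request sent vs. response received) easy to instrument, which matters for the cloud-vs-browser comparison the project describes.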