Update README.md
README.md CHANGED
@@ -74,6 +74,17 @@ The overall architecture is shown below:

---

## 📊 Benchmark Results

| Benchmark        | Task | Split | Metric   | Viper-L1 (CoT) |
|------------------|------|-------|----------|----------------|
| RealWorldQA      | VQA  | Test  | Accuracy | **33.73%**     |
| Other benchmarks | VQA  | Test  | Accuracy | Ongoing        |

**Notes.** CoT = Chain-of-Thought prompting enabled during inference. Exact generation settings (temperature, top-p, max tokens) can influence results; see the inference snippet below to replicate typical settings.
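
Because chain-of-thought outputs are free-form text, accuracy numbers like the one above also depend on how the final answer is extracted before comparison. The snippet below is a minimal sketch of one way to score a RealWorldQA-style multiple-choice split; the helper functions, the answer-extraction regex, and the example generations are illustrative assumptions, not this repository's official scorer.

```python
import re

def extract_choice(generation: str) -> str | None:
    """Return the last standalone option letter (A-D) in the model output (assumed format)."""
    matches = re.findall(r"\b([A-D])\b", generation.upper())
    return matches[-1] if matches else None

def accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of examples whose extracted choice matches the reference letter."""
    correct = sum(
        extract_choice(pred) == ref.strip().upper()
        for pred, ref in zip(predictions, references)
    )
    return correct / len(references)

# Toy example: two chain-of-thought generations and their ground-truth letters.
preds = ["Let's think step by step... so the answer is B.", "The answer is C."]
refs = ["B", "D"]
print(f"Accuracy: {accuracy(preds, refs):.2%}")  # Accuracy: 50.00%
```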

---

## 🧩 Usage

To get started with **inference**, follow the setup in the main repository:
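
The exact entry point lives in the main repository; as a rough illustration only, a typical Hugging Face `transformers` inference loop for a vision-language checkpoint looks like the sketch below. The model ID, the `AutoModelForVision2Seq` loading path, the prompt template, and the generation settings are placeholder assumptions, not confirmed values for Viper-L1.

```python
# Minimal sketch (not the official script): single-image VQA with a CoT-style prompt.
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "your-org/Viper-L1"  # hypothetical checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, device_map="auto")

image = Image.open("example.jpg")
prompt = (
    "Question: What color is the traffic light?\n"
    "Think step by step, then state the final answer."  # CoT prompting as noted above
)

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=512,  # illustrative settings; adjust to match reported results
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

For benchmark reproduction, greedy decoding (`do_sample=False`) is a common alternative to the sampling settings shown here, since it removes run-to-run variance.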