Update README.md
To address these challenges, we construct WeatherQA, a multimodal multiple-choice benchmark for meteorology comprising 15,400 entries that cover four themes and seven imaging modality tasks. We propose Logically Consistent Reinforcement Fine-Tuning (LoCo-RFT), which introduces a logical consistency reward to resolve Self-Contra. Based on this paradigm and WeatherQA, we present Weather-R1, the first reasoning VLM with logical faithfulness in meteorology, to the best of our knowledge. Weather-R1 (7B) achieves 52.9% accuracy on WeatherQA, a 9.8 percentage point gain over the baseline model Qwen2.5-VL-7B; it surpasses Supervised Fine-Tuning and RFT baselines, exceeds the original Qwen2.5-VL-32B, and improves out-of-domain ScienceQA performance by 4.98 percentage points.
<div align="center">
<img src="asserts/Case_Study.png" width="70%" />
<p><em>Response Comparison.</em></p>
</div>