Improve model card: add correct pipeline tag, library_name, and relevant links
This PR improves the model card for the model presented in [Effective Training Data Synthesis for Improving MLLM Chart Understanding](https://huggingface.co/papers/2508.06492).
Specifically, it:
- Corrects the `pipeline_tag` to `image-text-to-text` for better discoverability on the Hugging Face Hub.
- Adds `library_name: transformers` as the model is compatible with the 🤗 Transformers library, making it easier for users to identify how to use it (see the sketch after this list).
- Updates the introductory sentence to include a direct link to the Hugging Face paper page and the GitHub repository for easier access to the code.
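For context, here is a minimal, untested sketch of the kind of usage the new `pipeline_tag: image-text-to-text` and `library_name: transformers` metadata point users toward. The repo id, image URL, and question below are hypothetical placeholders and are not part of this PR.

```python
# Minimal sketch (placeholders, not from this PR): loading an ECD SFT checkpoint
# with the Transformers "image-text-to-text" pipeline referenced by the new tags.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="ChartFoundation/ECD-SFT-Qwen2.5-VL-7B-Instruct",  # hypothetical repo id
    # trust_remote_code=True,  # may be required for MiniCPM-V / Phi-3-vision based checkpoints
)

# Chat-style input: one user turn containing a chart image and a question about it.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/chart.png"},  # placeholder chart image
            {"type": "text", "text": "What is the highest value shown in this chart?"},
        ],
    }
]

out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"])
```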
Please review and merge this PR if everything looks good.
README.md
CHANGED
@@ -1,14 +1,17 @@
 ---
-license: mit
-metrics:
-- accuracy
 base_model:
 - llava-hf/llama3-llava-next-8b-hf
 - openbmb/MiniCPM-V-2_6
 - microsoft/Phi-3-vision-128k-instruct
 - Qwen/Qwen2.5-VL-7B-Instruct
+license: mit
+metrics:
+- accuracy
+pipeline_tag: image-text-to-text
+library_name: transformers
 ---
-
+
+**The following models are obtained via supervised fine-tuning (SFT) using the ECD-10k-Images dataset ([URL](https://huggingface.co/datasets/ChartFoundation/ECD-10k-Images)) proposed in our ICCV 2025 paper, "[Effective Training Data Synthesis for Improving MLLM Chart Understanding](https://huggingface.co/papers/2508.06492)" ([Code](https://github.com/yuweiyang-anu/ECD)).**
 
 **ECD Dataset Overview**:
 