Add pipeline tag and library name, improve description

#1
by nielsr (HF staff) - opened
Files changed (1)
  1. README.md +23 -6
README.md CHANGED
@@ -1,11 +1,21 @@
 ---
 license: mit
+pipeline_tag: image-to-text
+library_name: transformers
 ---
-<h2>[Installation Free!] Quicker Start with Hugging Face AutoModel</h2>
 
-No need to install this GitHub repo. Ensure that you use the Transformers package of 4.45.0 (`pip install transformers==4.45.0`).
-
-Do the image quality interpreting chat with q-sit.
+# Q-SiT: Image Quality Scoring and Interpreting with Large Language Models
+
+Q-SiT is a model for image quality scoring and interpretation. It uses a Large Language Model to perform both tasks simultaneously, recognizing the inherent connection between perception and decision-making in the human visual system. Unlike previous approaches, which treat scoring and interpreting as separate tasks, Q-SiT provides a unified framework.
+
+Project page: https://github.com/Q-Future/Q-SiT
+
+## Quicker Start with Hugging Face AutoModel
+
+No need to install this GitHub repo. Ensure that you use the Transformers package version 4.45.0 (`pip install transformers==4.45.0`).
+
+### Image Quality Interpreting Chat
 ```python
 import requests
 from PIL import Image
@@ -45,7 +55,8 @@ print(processor.decode(output[0][2:], skip_special_tokens=True).split("assistant
 # very low
 ```
 
-Do the image quality scoring with q-sit.
+### Image Quality Scoring
+
 ```python
 import torch
 import requests
@@ -77,7 +88,9 @@ conversation = [
     {
         "role": "user",
         "content": [
-            {"type": "text", "text": "Assume you are an image quality evaluator. \nYour rating should be chosen from the following five categories: Excellent, Good, Fair, Poor, and Bad (from high to low). \nHow would you rate the quality of this image?"},
+            {"type": "text", "text": "Assume you are an image quality evaluator.
+Your rating should be chosen from the following five categories: Excellent, Good, Fair, Poor, and Bad (from high to low).
+How would you rate the quality of this image?"},
             {"type": "image"},
         ],
     },
@@ -111,4 +124,8 @@ print("Weighted average score:", weighted_score)
 # if you want range from 0-5, multiply 5
 ```
 
-To test q-sit on datasets, please refer to evaluation scripts [here](https://github.com/Q-Future/Q-SiT/tree/main/eval_scripts).
+For dataset evaluation scripts, please refer to [this directory](https://github.com/Q-Future/Q-SiT/tree/main/eval_scripts). For training information, see the [Training Q-SiT](https://github.com/Q-Future/Q-SiT#training-q-sit) section of the GitHub repository.
+
+## Citation
+
+To do
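
The scoring snippet changed by this PR ends by printing a "weighted average score" over the five rating levels, i.e. a softmax over the logits of the Excellent/Good/Fair/Poor/Bad tokens, combined with per-level weights. A minimal self-contained sketch of that weighted-average step follows; the level weights (Excellent=1.0 down to Bad=0.0) and the example logits are illustrative assumptions, not values taken from the model:

```python
import math

def weighted_quality_score(level_logits):
    """Softmax over the five rating-level logits, ordered
    [Excellent, Good, Fair, Poor, Bad], then a weighted average
    with assumed weights 1.0 ... 0.0, yielding a score in [0, 1]."""
    weights = [1.0, 0.75, 0.5, 0.25, 0.0]
    m = max(level_logits)                       # shift for numerical stability
    exps = [math.exp(x - m) for x in level_logits]
    total = sum(exps)
    probs = [e / total for e in exps]           # softmax probabilities
    return sum(p * w for p, w in zip(probs, weights))

# Hypothetical logits at the rating position, best to worst.
score = weighted_quality_score([2.0, 1.0, 0.0, -1.0, -2.0])
print(round(score, 3))      # 0.863 -- score in [0, 1]
print(round(score * 5, 3))  # multiply by 5 for a 0-5 range, per the README comment
```

In the README's actual snippet the probabilities come from the model's next-token logits restricted to the five level tokens; the closed-form average here only illustrates how the continuous score is derived from them.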