Amitz244 committed on
Commit 149ee8b · verified · 1 Parent(s): 78afd29

Update README.md

Files changed (1)
  1. README.md +1 -12
README.md CHANGED
@@ -13,15 +13,7 @@ tags:
  ---
  # Don’t Judge Before You CLIP: Visual Emotion Analysis Model

- This model is part of our paper:
- *"Don’t Judge Before You CLIP: A Unified Approach for Perceptual Tasks"*
- It was trained on the *EmoSet dataset* to predict emotion class.
-
- ## Model Overview
-
- Visual perceptual tasks, such as visual emotion analysis, aim to estimate how humans perceive and interpret images. Unlike objective tasks (e.g., object recognition), these tasks rely on subjective human judgment, making labeled data scarce.
-
- Our approach leverages *CLIP* as a prior for perceptual tasks, inspired by cognitive research showing that CLIP correlates well with human judgment. This suggests that CLIP implicitly captures human biases, emotions, and preferences. We fine-tune CLIP minimally using LoRA and incorporate an MLP head to adapt it to each specific task.
+ PreceptCLIP-Emotions is a model designed to predict the emotions that an image evokes in users. This is the official model from the paper ["Don't Judge Before You CLIP: A Unified Approach for Perceptual Tasks"](https://arxiv.org/abs/2503.13260). Our model applies LoRA adaptation on the CLIP visual encoder with an additional MLP head to achieve state-of-the-art results.

  ## Training Details

@@ -32,9 +24,6 @@ Our approach leverages *CLIP* as a prior for perceptual tasks, inspired by cogni
  - *Learning Rate*: 0.0001
  - *Batch Size*: 32

- ## Performance
-
- The model was trained and evaluated on the EmoSet dataset, following the standard dataset splits. Our method achieves state-of-the-art performance compared to existing approaches, as described in our paper.
  ## Usage

  To use the model for inference:
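The usage code itself is cut off in this view. As a stand-in, here is a minimal sketch of the architecture the new description implies (CLIP visual encoder, LoRA adaptation, MLP head); the backbone choice, LoRA rank and target modules, and head width are assumptions for illustration, not values taken from this commit:

```python
# Sketch of the described architecture: CLIP visual encoder with LoRA adapters
# plus an MLP classification head. Backbone, LoRA settings, and head width are
# illustrative assumptions, not values from this repository.
import torch.nn as nn
from transformers import CLIPVisionModel
from peft import LoraConfig, get_peft_model


class PreceptCLIPEmotions(nn.Module):
    def __init__(self, backbone="openai/clip-vit-large-patch14", num_classes=8):
        super().__init__()
        vision = CLIPVisionModel.from_pretrained(backbone)
        hidden = vision.config.hidden_size
        # LoRA on the attention projections of the visual encoder; base weights stay frozen.
        lora_cfg = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])
        self.vision = get_peft_model(vision, lora_cfg)
        # MLP head mapping the pooled image embedding to EmoSet's 8 emotion classes.
        self.head = nn.Sequential(
            nn.Linear(hidden, 512),
            nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, pixel_values):
        pooled = self.vision(pixel_values=pixel_values).pooler_output
        return self.head(pooled)  # raw logits over emotion classes
```

Under this setup only the LoRA adapters and the MLP head would be optimized, consistent with the LoRA fine-tuning and the learning rate and batch size listed in the training details above.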
 
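And a sketch of inference with that class, assuming the released weights load as a plain PyTorch state dict and that EmoSet's eight emotion labels are used in the order shown (both assumptions):

```python
# Sketch of single-image inference with the class above. The checkpoint file
# name, processor choice, and label order are assumptions.
import torch
from PIL import Image
from transformers import CLIPImageProcessor

processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")
model = PreceptCLIPEmotions()
# Hypothetical weight file; the actual checkpoint format is not shown in this commit.
model.load_state_dict(torch.load("perceptclip_emotions.pt", map_location="cpu"))
model.eval()

# EmoSet's eight emotion categories (ordering assumed).
labels = ["amusement", "awe", "contentment", "excitement",
          "anger", "disgust", "fear", "sadness"]

image = Image.open("example.jpg").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

with torch.no_grad():
    probs = model(pixel_values).softmax(dim=-1)[0]

print(labels[probs.argmax().item()], round(probs.max().item(), 3))
```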