---
license: mit
---

Following the approaches described in our paper, and using the code provided by [LLaVA](https://llava-vl.github.io), we fine-tuned LLaVA on the contextual emotion recognition task using:

1) the [EMOTIC](https://s3.sunai.uoc.edu/emotic/index.html) train set, achieving a precision of 54.27
2) the EMOTIC validation set plus augmentation, achieving an F1 score of 36.83

We provide the LoRA weights for both fine-tunes; the base model is [llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b/tree/main).

The input is a) an image with a red bounding box around the target individual and b) a text prompt: "From suffering, pain, aversion, disapproval, anger, fear, annoyance, fatigue, disquietment, doubt/confusion, embarrassment, disconnection, affection, confidence, engagement, happiness, peace, pleasure, esteem, excitement, anticipation, yearning, sensitivity, surprise, sadness, and sympathy, pick the top labels that the person in the red bounding box is feeling at the same time."

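
For reference, the fixed prompt can be assembled from the label vocabulary. This is only a sketch: the `EMOTIONS` list and `build_prompt` helper are our own names, not part of the released code — only the label set and wording come from the prompt above.

```python
# The 26 candidate emotion labels listed in the prompt above.
EMOTIONS = [
    "suffering", "pain", "aversion", "disapproval", "anger", "fear",
    "annoyance", "fatigue", "disquietment", "doubt/confusion",
    "embarrassment", "disconnection", "affection", "confidence",
    "engagement", "happiness", "peace", "pleasure", "esteem",
    "excitement", "anticipation", "yearning", "sensitivity",
    "surprise", "sadness", "sympathy",
]

def build_prompt(labels=EMOTIONS):
    """Assemble the fixed instruction prompt from the label vocabulary."""
    head = ", ".join(labels[:-1])
    return (
        f"From {head}, and {labels[-1]}, pick the top labels that the "
        "person in the red bounding box is feeling at the same time."
    )
```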
The output is the emotion labels that the target is feeling.
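
Because the model answers in free text, a small post-processing step can map the answer back onto the label vocabulary. A minimal sketch, assuming simple substring matching (`parse_emotions` and the sample answer below are hypothetical, not part of the released code):

```python
# The 26 candidate emotion labels from the prompt.
EMOTIONS = [
    "suffering", "pain", "aversion", "disapproval", "anger", "fear",
    "annoyance", "fatigue", "disquietment", "doubt/confusion",
    "embarrassment", "disconnection", "affection", "confidence",
    "engagement", "happiness", "peace", "pleasure", "esteem",
    "excitement", "anticipation", "yearning", "sensitivity",
    "surprise", "sadness", "sympathy",
]

def parse_emotions(answer, vocabulary=EMOTIONS):
    """Return every vocabulary label mentioned in the model's free-text answer.

    Naive substring matching: e.g. "pain" would also match inside "painful".
    """
    text = answer.lower()
    return [label for label in vocabulary if label in text]

# Example with a made-up model answer:
print(parse_emotions("The person seems to feel engagement, happiness, and anticipation."))
# → ['engagement', 'happiness', 'anticipation']
```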
# Contextual Emotion Recognition with LLaVA
To perform contextual emotion recognition using our fine-tuned model, follow the steps below:

3. Receive the output, which includes the emotion labels that the target individual is feeling.
## License
This project is licensed under the [MIT License](LICENSE).