deedax committed
Commit
64b033f
1 Parent(s): 61f8c46

Update app.py

Files changed (1)
  1. app.py +70 -1
app.py CHANGED
@@ -61,7 +61,76 @@ st.markdown(
  )

  if app_mode == 'About the App':
- st.markdown('Will edit this later!!')
+ st.markdown("""
+ # Mood Scope
+ Mood Scope detects the emotions of people in an image using deep learning.
+
+ # Installation
+ - Clone this repo: ` git clone https://github.com/Daheer/mood-scope.git `
+ - Install requirements: ` pip install -r requirements.txt `
+ - Launch the Streamlit app: ` streamlit run mood_scope.py `
+
+ # Usage
+
+ The 'Run Mood Scope' section of the app lets you upload any image; the app then analyzes it and detects the mood of the person in the picture.
+
+ The app displays the detected dominant emotion with a suitable emoji.
+
+ It also displays the distribution of moods using a spider chart: the higher a point, the stronger that emotion's presence in the image.
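+
+ The charting code isn't part of this diff; as a minimal sketch, a spider (radar) chart like the one described can be drawn with Plotly, which Streamlit renders via `st.plotly_chart` (the library choice and the scores below are assumptions):
+
+ ```python
+ import plotly.graph_objects as go
+ import streamlit as st
+
+ emotions = ["Angry", "Disgusted", "Fearful", "Happy", "Neutral", "Sad", "Surprised"]
+ scores = [0.05, 0.02, 0.03, 0.70, 0.10, 0.04, 0.06]  # hypothetical mood distribution
+
+ # One radial point per emotion; fill="toself" shades the enclosed area.
+ fig = go.Figure(go.Scatterpolar(r=scores, theta=emotions, fill="toself"))
+ st.plotly_chart(fig)
+ ```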
+
+ ### Emotion-emoji guide
+
+ | Emotion | Emoji |
+ |------------|------------|
+ | Angry | 😡 |
+ | Disgusted | 🤢 |
+ | Fearful | 😨 |
+ | Happy | 😃 |
+ | Neutral | 😐 |
+ | Sad | ☹️ |
+ | Surprised | 😮 |
+
+ The app is available on two platforms:
+ - [`Hugging Face Spaces`](https://huggingface.co/spaces/deedax/mood-scope)
+ - [`Render`](https://mood-scope.onrender.com/)
+
+ # Features
+
+ - Image upload
+ - Emotion detection
+ - Spider chart display
+ - Emotion intensity analysis
+
+ # Built Using
+ - [Python](https://python.org)
+ - [PyTorch](https://pytorch.org)
+ - [OpenAI CLIP](https://openai.com/research/clip)
+ - [Streamlit](https://streamlit.io/)
+
+ # Details
+
+ Mood Scope achieves zero-shot image classification using CLIP. CLIP is a powerful tool for this task because it leverages both visual and language information to classify images, which means no dataset was needed for any training or fine-tuning.
+
+ First, the emotions (angry, fearful, sad, neutral, etc.) were organized using a template to create natural-language descriptions of the images. Each emotion was inserted into the template phrase "a photo of a {emotion} person", where {emotion} is one of the emotions in the list. The text descriptions were then tokenized to generate text embeddings for the CLIP model.
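+
+ A minimal sketch of the text side, assuming the Hugging Face `transformers` CLIP implementation and the `openai/clip-vit-base-patch32` checkpoint (neither is confirmed by this diff):
+
+ ```python
+ import torch
+ from transformers import CLIPModel, CLIPProcessor
+
+ # Assumed checkpoint; the app may load a different CLIP variant.
+ model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
+ processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
+
+ emotions = ["angry", "disgusted", "fearful", "happy", "neutral", "sad", "surprised"]
+ prompts = [f"a photo of a {emotion} person" for emotion in emotions]
+
+ # Tokenize the prompts and encode them into one text embedding per emotion.
+ text_inputs = processor(text=prompts, return_tensors="pt", padding=True)
+ with torch.no_grad():
+     text_embeds = model.get_text_features(**text_inputs)
+ ```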
+
+ The image was preprocessed using the CLIPProcessor, which resizes and normalizes it to prepare it for feature extraction. The CLIP model then computes image embeddings that capture the visual features of the image.
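+
+ Continuing the sketch above (`face.jpg` is a hypothetical file name; `model` and `processor` come from the previous snippet):
+
+ ```python
+ from PIL import Image
+
+ image = Image.open("face.jpg")
+
+ # The processor resizes and normalizes the image for CLIP's vision encoder.
+ image_inputs = processor(images=image, return_tensors="pt")
+ with torch.no_grad():
+     image_embeds = model.get_image_features(**image_inputs)
+ ```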
+
+ To calculate how similar each description is to the image, a dot product is performed between the image embedding and each text embedding. Each score indicates how well a description matches the image, and the scores are used to classify the image into one of the emotion categories.
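+
+ Finishing the sketch; normalizing first (so the dot product is a cosine similarity), CLIP's usual logit scale of 100, and the softmax that produces the spider-chart distribution are standard CLIP practice rather than details shown in this diff:
+
+ ```python
+ image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
+ text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
+
+ scores = image_embeds @ text_embeds.T        # shape: (1, num_emotions)
+ probs = (100 * scores).softmax(dim=-1).squeeze(0)
+
+ dominant = emotions[probs.argmax().item()]   # e.g. "happy"
+ print(dominant, probs.tolist())
+ ```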
+
+ # Contact
+
+ Dahir Ibrahim (Deedax Inc)
+
+ Email - dahiru.ibrahim@outlook.com
+
+ Twitter - https://twitter.com/DeedaxInc
+
+ YouTube - https://www.youtube.com/@deedaxinc
+
+ Project Link - https://github.com/Daheer/mood-scope
+ """)

  elif app_mode == 'Run Mood Scope':
136