Spaces: Build error
taquynhnga committed • Commit 7c20c15
Parent(s): 6ad0915
Update Home.py
Home.py CHANGED

@@ -1,13 +1,33 @@
-import os
-# os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'
-
 import streamlit as st
-from transformers import AutoFeatureExtractor, AutoModelForImageClassification
-import torch
 
 st.set_page_config(layout='wide')
+
 st.title('About')
 
-
+# INTRO
+intro_text = """Convolutional neural networks (ConvNets) have evolved at a rapid pace since the 2010s.
+Some representative ConvNet models are VGGNet, Inceptions, ResNe(X)t, DenseNet, MobileNet, EfficientNet and RegNet, which focus on different aspects of accuracy, efficiency, and scalability.
+In 2020, Vision Transformers (ViT) were introduced as Transformer models solving computer vision problems.
+Larger model and dataset sizes allow ViT to perform significantly better than ResNet; however, ViT still encountered challenges in generic computer vision tasks such as object detection and semantic segmentation.
+Swin Transformer's success led Transformers to be adopted as a generic vision backbone, showing outstanding performance in a wide range of computer vision tasks.
+Nevertheless, the success of this approach is still primarily attributed to Transformers' inherent superiority, rather than to the intrinsic inductive biases of convolutions.
+In 2022, Zhuang Liu et al. proposed a pure convolutional model dubbed ConvNeXt, discovered by modernizing a standard ResNet towards the design of Vision Transformers, and claimed that it outperforms them.
+
+The project aims to interpret the ConvNeXt model with several visualization techniques.
+A web interface was then built to demonstrate the interpretations, helping us look inside the deep ConvNeXt model and answer questions such as “what patterns maximally activated this filter (channel) in this layer?” and “which features are responsible for the current prediction?”.
+Due to limitations in time and resources, the project uses only the tiny-sized ConvNeXt model, trained on ImageNet-1k at resolution 224x224, and the 50,000 images of the ImageNet-1k validation set for demo purposes.
 
+In this web app, two visualization techniques are implemented and demonstrated: **Maximally activating patches** and **SmoothGrad**.
+Besides, this web app also helps investigate the effect of **adversarial attacks** on ConvNeXt interpretations.
+Last but not least, a final page stores the 50,000 images of the **ImageNet-1k** validation set, supporting the two pages above for searching and referencing.
+"""
+st.write(intro_text)
 
+# 4 PAGES
+sections_text = """Overall, there are 4 functionalities in this web app:
+1) Maximally activating patches: the visualization method on this page answers the question “what patterns maximally activated this filter (channel)?”.
+2) SmoothGrad: the visualization method on this page answers the question “which features are responsible for the current prediction?”.
+3) Adversarial attack: how do adversarial attacks affect ConvNeXt interpretations?
+4) ImageNet1k: the storage of the 50,000 images in the validation set.
+"""
+st.write(sections_text)
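The SmoothGrad technique named in the About text averages input gradients over several noisy copies of the image to get a less noisy saliency map. This commit does not include the Space's actual implementation; below is a minimal sketch assuming a generic PyTorch classifier, with all names (`smoothgrad`, `n_samples`, `noise_std`) illustrative:

```python
# SmoothGrad sketch (illustrative, not the Space's actual code):
# average the input gradient over several noise-perturbed copies of the image.
import torch

def smoothgrad(model, image, target_class, n_samples=8, noise_std=0.1):
    """Return an averaged saliency map for `image` (shape [C, H, W])."""
    model.eval()
    grads = torch.zeros_like(image)
    for _ in range(n_samples):
        # Perturb the input with Gaussian noise and track its gradient.
        noisy = (image + noise_std * torch.randn_like(image)).requires_grad_(True)
        score = model(noisy.unsqueeze(0))[0, target_class]
        score.backward()
        grads += noisy.grad
    # Average the gradients, then collapse channels to a [H, W] saliency map.
    return (grads / n_samples).abs().max(dim=0).values
```

In practice the ConvNeXt checkpoint and preprocessing from the deleted `transformers` imports would stand in for the generic `model` here.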
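The About text does not say which adversarial attack the Space uses; a common baseline is the fast gradient sign method (FGSM), sketched here under the same assumption of a generic PyTorch classifier (the function name and `eps` default are illustrative):

```python
# FGSM sketch (an illustrative attack, not necessarily the one the Space uses).
# `image` is a single [C, H, W] tensor with pixel values in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_class, eps=0.03):
    """Return an adversarial copy of `image` that increases the loss for `true_class`."""
    model.eval()
    x = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x.unsqueeze(0)), torch.tensor([true_class]))
    loss.backward()
    # Step in the direction that increases the loss, then keep a valid pixel range.
    return (image + eps * x.grad.sign()).clamp(0.0, 1.0)
```

Re-running SmoothGrad on the perturbed image would then show how the interpretation shifts under attack, which is what the Adversarial attack page investigates.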
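Maximally activating patches are typically found by scanning a dataset for the inputs (and locations) that drive one channel of a layer hardest. The commit does not show that code; one building block, locating the peak activation of a channel for a single image via a forward hook, can be sketched as follows (all names illustrative):

```python
# Locate where one channel of a layer fires most strongly for one image
# (illustrative building block for "maximally activating patches").
import torch

def max_activating_position(backbone, layer, image, channel):
    """Return the (row, col) feature-map location where `channel` of `layer` peaks."""
    captured = {}

    def hook(module, inputs, output):
        captured['act'] = output.detach()

    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        backbone(image.unsqueeze(0))
    handle.remove()

    fmap = captured['act'][0, channel]       # [H', W'] activation map
    idx = torch.argmax(fmap)                 # flattened index of the maximum
    return divmod(int(idx), fmap.shape[1])   # back to (row, col)
```

Mapping that feature-map location back through the layer's receptive field gives the image patch that maximally activated the channel.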