tags:
- video-classification
library_name: keras
---

# Video Classification with a CNN-RNN Architecture

**Author:** Sayak Paul
**Date created:** 2021/05/28
**Last modified:** 2021/06/05
**Description:** Training a video classifier with transfer learning and a recurrent model on the UCF101 dataset.

**Keras documentation [link](https://keras.io/examples/vision/video_classification/)**

This example demonstrates video classification, an important use case with applications in recommendations, security, and so on. We will use the UCF101 dataset to build our video classifier. The dataset consists of videos categorized into different actions, like cricket shot, punching, biking, etc. This dataset is commonly used to build action recognizers, which are an application of video classification.

A video consists of an ordered sequence of frames. Each frame contains spatial information, and the sequence of those frames contains temporal information. To model both of these aspects, we use a hybrid architecture that consists of convolutions (for spatial processing) as well as recurrent layers (for temporal processing). Specifically, we'll use a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) consisting of GRU layers. This kind of hybrid architecture is popularly known as a CNN-RNN.

```bash
Test video path: v_Punch_g03_c02.avi
Punch: 56.50%
TennisSwing: 29.97%
PlayingCello: 6.47%
ShavingBeard: 3.69%
CricketShot: 3.38%
```

![Example video from dataset](./animation.gif)
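
The CNN-RNN idea described above can be sketched in Keras: a CNN backbone extracts one feature vector per frame, and a small GRU-based head models the frame sequence. The sketch below covers only the recurrent head over precomputed per-frame features; the sequence length, feature size, and layer widths are illustrative assumptions, not this model card's exact configuration.

```python
import numpy as np
import keras
from keras import layers

MAX_SEQ_LENGTH = 20   # frames sampled per video (assumption)
NUM_FEATURES = 2048   # per-frame feature size from the CNN backbone (assumption)
NUM_CLASSES = 101     # UCF101 has 101 action classes

# Inputs: precomputed CNN features for each frame, plus a boolean mask
# marking real (non-padded) frames so the GRUs can skip padding.
frame_features = keras.Input((MAX_SEQ_LENGTH, NUM_FEATURES))
mask = keras.Input((MAX_SEQ_LENGTH,), dtype="bool")

# Recurrent head: two stacked GRUs summarize the frame sequence,
# then a small classifier maps the summary to action probabilities.
x = layers.GRU(16, return_sequences=True)(frame_features, mask=mask)
x = layers.GRU(8)(x)
x = layers.Dropout(0.4)(x)
x = layers.Dense(8, activation="relu")(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = keras.Model([frame_features, mask], outputs)
model.compile(
    loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"]
)

# Smoke test with random features standing in for real CNN embeddings.
dummy_features = np.random.rand(2, MAX_SEQ_LENGTH, NUM_FEATURES).astype("float32")
dummy_mask = np.ones((2, MAX_SEQ_LENGTH), dtype=bool)
probs = model.predict([dummy_features, dummy_mask], verbose=0)
print(probs.shape)  # (2, 101); each row is a distribution over the 101 classes
```

In a full pipeline, the per-frame features would come from a pretrained image CNN applied to each frame (transfer learning), and `argmax` over a row of `probs` would give a ranked prediction like the sample output shown above.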