FBAGSTM committed · Commit bb39929 · verified · 1 Parent(s): 9c4488f

Release AI-ModelZoo-4.0.0

Files changed (1): README.md (+22 -16)
README.md CHANGED
@@ -1,10 +1,16 @@
- # IGN HAR model

## **Use case** : `Human activity recognition`

# Model description

- IGN is an acronym of Ignatov, a convolutional neural network (CNN) based model for performing the human activity recognition (HAR) task on 3D accelerometer data. In this work we use a modified version of the IGN model presented in the [paper[2]](#2). It uses the 3D raw data with a gravity rotation and suppression filter as preprocessing. This is a light model with a very small footprint in terms of flash and RAM as well as computational requirements.

This network supports any input size greater than (20 x 3 x 1), but we recommend using at least (24 x 3 x 1), i.e. a window length of 24 samples. In this folder we provide IGN models trained with two different window lengths [24 and 48].
 
@@ -48,16 +54,16 @@ For an input resolution of wl x 3 x 1 and P classes
## Metrics

- Measurements are done with the [STM32Cube.AI Dev Cloud](https://stm32ai-cs.st.com/home) version 10.0.0 with the input/output allocated options enabled and balanced optimization. The reported inference time is measured with **STM32Cube.AI version 10.0.0** on the **B-U585I-IOT02A** STM32 board running at a frequency of **160 MHz**.

Reference memory footprints and inference times for the IGN models are given in the table below. The accuracies on two datasets are provided in the following sections.

- | Model | Format | Input Shape | Series | Activation RAM (KiB) | Runtime RAM (KiB) | Weights Flash (KiB) | Code Flash (KiB) | Total RAM (KiB) | Total Flash (KiB) | Inference Time (ms) | STM32Cube.AI version |
- |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
- | [IGN wl 24](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/human_activity_recognition/ign/ST_pretrainedmodel_public_dataset/WISDM/ign_wl_24/ign_wl_24.h5) | FLOAT32 | 24 x 3 x 1 | STM32U5 | 2.03 | 1.91 | 11.97 | 13.61 | 3.94 | 25.58 | 2.25 | 10.0.0 |
- | [IGN wl 48](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/human_activity_recognition/ign/ST_pretrainedmodel_public_dataset/WISDM/ign_wl_48/ign_wl_48.h5) | FLOAT32 | 48 x 3 x 1 | STM32U5 | 4.56 | 1.91 | 38.97 | 13.61 | 6.47 | 52.58 | 8.17 | 10.0.0 |
 
@@ -68,14 +74,14 @@ Reference memory footprint and inference times for IGN models are given in the t
Dataset details: a custom dataset, not publicly available. Number of classes: 5 [Stationary, Walking, Jogging, Biking, Vehicle] **(we kept only 4 [Stationary, Walking, Jogging, Biking] and removed the Vehicle/driving class)**. Number of input frames: 81,151 (for wl = 24) and 40,575 (for wl = 48).

- | Model | Format | Resolution | Accuracy (%) |
- |:---:|:---:|:---:|:---:|
- | [IGN wl 24](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/human_activity_recognition/ign/ST_pretrainedmodel_public_dataset/mobility_v1/ign_wl_24/ign_wl_24.h5) | FLOAT32 | 24 x 3 x 1 | 94.64 |
- | [IGN wl 48](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/human_activity_recognition/ign/ST_pretrainedmodel_public_dataset/mobility_v1/ign_wl_48/ign_wl_48.h5) | FLOAT32 | 48 x 3 x 1 | 95.01 |

The confusion matrix for IGN wl 24 with float32 weights on the mobility_v1 dataset is given below.

- ![plot](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/human_activity_recognition/ign/doc/img/mobility_v1_ign_wl_24_confusion_matrix.png)
 
  ### Accuracy with WISDM dataset
@@ -83,10 +89,10 @@ Confusion matrix for IGN wl 24 with Float32 weights for mobility_v1 dataset is g
Dataset details: [WISDM](https://www.cis.fordham.edu/wisdm/dataset.php), license [CC BY 2.0](https://creativecommons.org/licenses/by/2.0/), citation [[1]](#1). Number of classes: 4 (we combine [Upstairs and Downstairs] into Stairs and [Standing and Sitting] into Stationary). Number of samples: 45,579 (at wl = 24) and 22,880 (at wl = 48).

- | Model | Format | Resolution | Accuracy (%) |
- |:---:|:---:|:---:|:---:|
- | [IGN wl 24](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/human_activity_recognition/ign/ST_pretrainedmodel_public_dataset/WISDM/ign_wl_24/ign_wl_24.h5) | FLOAT32 | 24 x 3 x 1 | 91.7 |
- | [IGN wl 48](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/human_activity_recognition/ign/ST_pretrainedmodel_public_dataset/WISDM/ign_wl_48/ign_wl_48.h5) | FLOAT32 | 48 x 3 x 1 | 93.67 |
 
  ## Retraining and Integration in a simple example:
 
+ ---
+ license: other
+ license_name: sla0044
+ license_link: >-
+ https://github.com/STMicroelectronics/stm32ai-modelzoo/blob/main/human_activity_recognition/st_ign/ST_pretrainedmodel_custom_dataset/LICENSE.md
+ ---
+ # ST_IGN HAR model

## **Use case** : `Human activity recognition`

# Model description
 
+ IGN is an acronym of Ignatov, a convolutional neural network (CNN) based model for performing the human activity recognition (HAR) task on 3D accelerometer data. In this work we use a modified version of the IGN model presented in the [paper[2]](#2). The `st_` prefix denotes that it is a variation of the model built by STMicroelectronics. It uses the 3D raw data with a gravity rotation and suppression filter as preprocessing. This is a light model with a very small footprint in terms of flash and RAM as well as computational requirements.
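To make the preprocessing concrete, below is a minimal Python sketch of one plausible gravity rotation and suppression step. It is an illustration, not the model zoo implementation: the IIR low-pass gravity estimate (`alpha`), the function name, and the axis convention are all assumptions.

```python
import numpy as np

def gravity_rotate_suppress(frame, alpha=0.9):
    """frame: (wl, 3) raw accelerometer window -> (wl, 3) preprocessed window."""
    # Estimate gravity per sample with an exponential moving average (low-pass).
    g = frame[0].astype(np.float64)
    gravity = np.empty(frame.shape, dtype=np.float64)
    for i, sample in enumerate(frame):
        g = alpha * g + (1.0 - alpha) * sample
        gravity[i] = g

    # Rotation aligning the mean gravity direction with the z-axis (Rodrigues' formula).
    g_unit = gravity.mean(axis=0)
    g_unit = g_unit / np.linalg.norm(g_unit)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(g_unit, z)                     # rotation axis (unnormalized)
    c = float(np.dot(g_unit, z))                # cosine of the rotation angle
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    R = np.eye(3) + vx + (vx @ vx) / (1.0 + c)  # undefined if gravity is exactly -z

    # Rotate the window, then suppress (subtract) the rotated gravity estimate.
    return frame @ R.T - gravity @ R.T
```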
 
This network supports any input size greater than (20 x 3 x 1), but we recommend using at least (24 x 3 x 1), i.e. a window length of 24 samples. In this folder we provide IGN models trained with two different window lengths [24 and 48].
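As an illustration of the expected input layout, here is a small sketch that slices a continuous 3-axis accelerometer stream into non-overlapping (wl x 3 x 1) windows; the actual training pipeline may use a different stride or overlap.

```python
import numpy as np

def make_windows(stream, wl=24):
    """stream: (N, 3) accelerometer samples -> (num_windows, wl, 3, 1)."""
    num_windows = len(stream) // wl       # drop the incomplete tail window
    trimmed = stream[: num_windows * wl]
    return trimmed.reshape(num_windows, wl, 3)[..., np.newaxis]

windows = make_windows(np.zeros((1000, 3), dtype=np.float32))
print(windows.shape)  # (41, 24, 3, 1)
```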
 
 
## Metrics

+ Measurements are done with the [STEdge AI Dev Cloud](https://stm32ai-cs.st.com/home) version 3.0.0 with the input/output allocated options enabled and balanced optimization. The reported inference time is measured with **STEdge AI version 3.0.0** on the **B-U585I-IOT02A** STM32 board running at a frequency of **160 MHz**.

Reference memory footprints and inference times for the IGN models are given in the table below. The accuracies on two datasets are provided in the following sections.

+ | Model | Format | Input Shape | Series | Activation RAM (KiB) | Runtime RAM (KiB) | Weights Flash (KiB) | Code Flash (KiB) | Total RAM (KiB) | Total Flash (KiB) | Inference Time (ms) | STEdge AI Core version |
+ |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+ | [st_ign_wl_24](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/human_activity_recognition/st_ign/ST_pretrainedmodel_public_dataset/WISDM/st_ign_wl_24/st_ign_wl_24.keras) | FLOAT32 | 24 x 3 x 1 | STM32U5 | 2.88 | 0.28 | 11.97 | 6.15 | 3.16 | 18.12 | 1.99 | 3.0.0 |
+ | [st_ign_wl_48](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/human_activity_recognition/st_ign/ST_pretrainedmodel_public_dataset/WISDM/st_ign_wl_48/st_ign_wl_48.keras) | FLOAT32 | 48 x 3 x 1 | STM32U5 | 9.91 | 0.28 | 38.97 | 6.16 | 10.19 | 45.13 | 7.23 | 3.0.0 |
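As a quick usage sketch, the pretrained `.keras` files from the table can be loaded with standard Keras calls; the local file path below is illustrative, and real inputs would come from the windowing and preprocessing described above.

```python
import numpy as np
import tensorflow as tf

# Load the pretrained float32 model (adjust the path to where the .keras
# file from the table above was downloaded).
model = tf.keras.models.load_model("st_ign_wl_24.keras")

# One dummy (24 x 3 x 1) window; real inputs come from the preprocessing above.
window = np.zeros((1, 24, 3, 1), dtype=np.float32)
probs = model.predict(window)
print("predicted class index:", int(np.argmax(probs, axis=-1)[0]))
```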
 
Dataset details: a custom dataset, not publicly available. Number of classes: 5 [Stationary, Walking, Jogging, Biking, Vehicle] **(we kept only 4 [Stationary, Walking, Jogging, Biking] and removed the Vehicle/driving class)**. Number of input frames: 81,151 (for wl = 24) and 40,575 (for wl = 48).

+ | Model | Format | Resolution | Accuracy (%) |
+ |:---:|:---:|:---:|:---:|
+ | [st_ign_wl_24](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/human_activity_recognition/st_ign/ST_pretrainedmodel_custom_dataset/mobility_v1/st_ign_wl_24/st_ign_wl_24.keras) | FLOAT32 | 24 x 3 x 1 | 95.04 |
+ | [st_ign_wl_48](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/human_activity_recognition/st_ign/ST_pretrainedmodel_custom_dataset/mobility_v1/st_ign_wl_48/st_ign_wl_48.keras) | FLOAT32 | 48 x 3 x 1 | 94.29 |

The confusion matrix for IGN wl 24 with float32 weights on the mobility_v1 dataset is given below.

+ ![plot](./doc/img/mobility_v1_st_ign_wl_24_confusion_matrix.png)

### Accuracy with WISDM dataset

Dataset details: [WISDM](https://www.cis.fordham.edu/wisdm/dataset.php), license [CC BY 2.0](https://creativecommons.org/licenses/by/2.0/), citation [[1]](#1). Number of classes: 4 (we combine [Upstairs and Downstairs] into Stairs and [Standing and Sitting] into Stationary). Number of samples: 45,579 (at wl = 24) and 22,880 (at wl = 48).

+ | Model | Format | Resolution | Accuracy (%) |
+ |:---:|:---:|:---:|:---:|
+ | [st_ign_wl_24](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/human_activity_recognition/st_ign/ST_pretrainedmodel_public_dataset/WISDM/st_ign_wl_24/st_ign_wl_24.keras) | FLOAT32 | 24 x 3 x 1 | 91.78 |
+ | [st_ign_wl_48](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/human_activity_recognition/st_ign/ST_pretrainedmodel_public_dataset/WISDM/st_ign_wl_48/st_ign_wl_48.keras) | FLOAT32 | 48 x 3 x 1 | 94.09 |
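The 6-to-4 class merging described in the dataset details above can be expressed as a simple label map; the exact WISDM label strings are assumptions based on the public v1.1 annotations.

```python
# Collapse WISDM's six activity labels into the four classes used here.
WISDM_TO_4_CLASSES = {
    "Walking":    "Walking",
    "Jogging":    "Jogging",
    "Upstairs":   "Stairs",      # Upstairs + Downstairs -> Stairs
    "Downstairs": "Stairs",
    "Sitting":    "Stationary",  # Sitting + Standing -> Stationary
    "Standing":   "Stationary",
}

labels = ["Walking", "Upstairs", "Sitting", "Jogging"]
print([WISDM_TO_4_CLASSES[label] for label in labels])
# -> ['Walking', 'Stairs', 'Stationary', 'Jogging']
```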
 
  ## Retraining and Integration in a simple example: