JackIsNotInTheBox committed on
Commit
e81cc3b
·
1 Parent(s): 1d5a74f

Upload 21 files

README.md CHANGED
@@ -1,14 +1,132 @@
1
- ---
2
- title: Taro
3
- emoji: 👁
4
- colorFrom: green
5
- colorTo: green
6
- sdk: gradio
7
- sdk_version: 6.9.0
8
- python_version: '3.12'
9
- app_file: app.py
10
- pinned: false
11
- short_description: 'TARO: Video-to-Audio Synthesis (ICCV 2025)'
12
- ---
13
-
14
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
1
+ # [ICCV'25] TARO: Timestep-Adaptive Representation Alignment with Onset-Aware Conditioning for Synchronized Video-to-Audio Synthesis
2
+ <br>
3
+
4
+ **[Tri Ton](https://triton99.github.io/)<sup>1</sup>, [Ji Woo Hong](https://jiwoohong93.github.io/)<sup>1</sup>, [Chang D. Yoo](https://sanctusfactory.com/family.php)<sup>1†</sup>**
5
+ <br>
6
+ <sup>1</sup>KAIST, South Korea
7
+ <br>
8
+ †Corresponding author
9
+
10
+ <p align="center">
11
+ <a href="https://triton99.github.io/taro-site/" target='_blank'>
12
+ <img src="https://img.shields.io/badge/🐳-Project%20Page-blue">
13
+ </a>
14
+ <a href="https://arxiv.org/abs/2504.05684" target='_blank'>
15
+ <img src="https://img.shields.io/badge/arXiv-2504.05684-b31b1b.svg">
16
+ </a>
17
+ <img alt="GitHub Repo stars" src="https://img.shields.io/github/stars/triton99/TARO">
18
+ </p>
19
+
20
+ ## 📣 News
21
+ - **[09/2025]**: Training & Inference code released.
22
+ - **[06/2025]**: TARO accepted to ICCV 2025 🎉.
23
+ - **[04/2025]**: Paper uploaded to arXiv. Check out the manuscript [here](https://arxiv.org/abs/2504.05684).
24
+
25
+ ## To-Dos
26
+ - [x] Release model weights on Google Drive.
27
+ - [x] Release inference code
28
+ - [x] Release training code & dataset preparation
29
+
30
+ ## ⚙️ Environmental Setups
31
+ 1. Clone TARO.
32
+ ```bash
33
+ git clone https://github.com/triton99/TARO
34
+ cd TARO
35
+ ```
36
+
37
+ 2. Create the environment.
38
+ ```bash
39
+ conda create -n taro python=3.10
40
+ conda activate taro
41
+ pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
42
+
43
+ # Training
44
+ pip install --force pip==24.0
45
+ git clone https://github.com/pytorch/fairseq
46
+ cd fairseq
47
+ pip install --editable ./ --no-build-isolation
48
+ cd ..
49
+
50
+ git clone https://github.com/cwx-worst-one/EAT.git
51
+
52
+ # Inference
53
+ pip3 install -r requirements.txt
54
+ ```
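
As a quick sanity check after installation, you can confirm that the pinned PyTorch build and CUDA are visible from Python (a minimal sketch; the expected version strings assume the cu121 wheels from the command above):

```python
# Environment sanity check (illustrative; versions assume the pinned cu121 wheels).
import torch, torchvision, torchaudio

print("torch:", torch.__version__)              # expected: 2.1.0+cu121
print("torchvision:", torchvision.__version__)  # expected: 0.16.0+cu121
print("torchaudio:", torchaudio.__version__)    # expected: 2.1.0+cu121
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```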
55
+
56
+ ## 📁 Data Preparations
57
+ Please download the [VGGSound dataset](https://www.robots.ox.ac.uk/~vgg/data/vggsound/), extract the videos, and organize them into two folders: one with .mp4 files and one with corresponding .wav files (matching base filenames).
58
+
59
+ Update the path variables at the top of the preprocessing scripts to point to your folders, then run:
60
+ ```bash
61
+ ./preprocess_video.sh
62
+
63
+ ./preprocess_audio.sh
64
+ ```
65
+
66
+ After processing, the data will have the following structure:
67
+ ```bash
68
+ VGGSound/train
69
+ ├── videos
70
+ │ ├── abc.mp4
71
+ │ └── ...
72
+ ├── audios
73
+ │ ├── abc.wav
74
+ │ └── ...
75
+ ├── cavp_feats
76
+ │ ├── abc.npz
77
+ │ └── ...
78
+ ├── onset_feats
79
+ │ ├── abc.npz
80
+ │ └── ...
81
+ ├── melspec
82
+ │ ├── abc.npy
83
+ │ └── ...
84
+ └── fbank
85
+     ├── abc.npy
86
+     └── ...
87
+ ```
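
For reference, a single preprocessed sample can then be assembled from these folders roughly as follows (a hedged sketch: the key names inside the `.npz` files and the array shapes are assumptions, so inspect your own files with `np.load(path).files` to confirm):

```python
# Illustrative loader for one preprocessed VGGSound clip.
# NOTE: .npz key names and array shapes are assumptions, not taken from the repo.
from pathlib import Path
import numpy as np

root = Path("VGGSound/train")
clip_id = "abc"

cavp  = np.load(root / "cavp_feats"  / f"{clip_id}.npz")  # CAVP video features
onset = np.load(root / "onset_feats" / f"{clip_id}.npz")  # onset features
mel   = np.load(root / "melspec"     / f"{clip_id}.npy")  # mel spectrogram target
fbank = np.load(root / "fbank"       / f"{clip_id}.npy")  # filterbank input for EAT

print("cavp keys:", cavp.files, "| onset keys:", onset.files)
print("melspec shape:", mel.shape, "| fbank shape:", fbank.shape)
```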
88
+
89
+
90
+ ## 🚀 Getting Started
91
+
92
+ ### Download Checkpoints
93
+
94
+ The pretrained TARO checkpoint can be downloaded from [Google Drive](https://drive.google.com/drive/folders/1YqLsEtVYeSchhAh-wKS-BWuB6MK6_mJB?usp=sharing).
95
+
96
+ The CAVP checkpoint can be downloaded from [Diff-Foley](https://github.com/luosiallen/Diff-Foley).
97
+
98
+ The onset checkpoint can be downloaded from [SyncFusion](https://github.com/mcomunita/syncfusion).
99
+
100
+ ### Training
101
+ ```bash
102
+ ./train.sh
103
+ ```
104
+
105
+ ### Inference
106
+ To run inference on a single video, use the following command:
107
+ ```bash
108
+ python infer.py \
109
+ --video_path ./test.mp4 \
110
+ --save_folder_path ./output \
111
+ --cavp_config_path ./cavp/model/cavp.yaml \
112
+ --cavp_ckpt_path ./cavp_epoch66.ckpt \
113
+ --onset_ckpt_path ./onset_model.ckpt \
114
+ --model_ckpt_path ./taro_ckpt.pt
115
+ ```
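
To synthesize audio for a whole folder of clips, the same script can be driven in a loop; below is a small convenience sketch around `infer.py` (the checkpoint paths simply mirror the single-video command above, and the `./videos` folder layout is an assumption):

```python
# Batch wrapper around infer.py (illustrative convenience script).
import subprocess
from pathlib import Path

video_dir = Path("./videos")   # folder of .mp4 clips (assumed layout)
out_dir = Path("./output")
out_dir.mkdir(parents=True, exist_ok=True)

for video in sorted(video_dir.glob("*.mp4")):
    subprocess.run([
        "python", "infer.py",
        "--video_path", str(video),
        "--save_folder_path", str(out_dir),
        "--cavp_config_path", "./cavp/model/cavp.yaml",
        "--cavp_ckpt_path", "./cavp_epoch66.ckpt",
        "--onset_ckpt_path", "./onset_model.ckpt",
        "--model_ckpt_path", "./taro_ckpt.pt",
    ], check=True)
```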
116
+
117
+ ## 📖 Citing TARO
118
+
119
+ If you find our repository useful, please consider giving it a star ⭐ and citing our paper in your work:
120
+
121
+ ```bibtex
122
+ @inproceedings{ton2025taro,
123
+ title = {TARO: Timestep-Adaptive Representation Alignment with Onset-Aware Conditioning for Synchronized Video-to-Audio Synthesis},
124
+ author = {Ton, Tri and Hong, Ji Woo and Yoo, Chang D.},
125
+ year = {2025},
126
+ booktitle = {International Conference on Computer Vision (ICCV)},
127
+ }
128
+ ```
129
+
130
+ ## 🤗 Acknowledgements
131
+
132
+ Our code is based on [REPA](https://github.com/sihyun-yu/REPA), [Diff-Foley](https://github.com/luosiallen/Diff-Foley), and [SyncFusion](https://github.com/mcomunita/syncfusion). We thank the authors for their excellent work!
cavp/.DS_Store ADDED
Binary file (6.15 kB).
 
cavp/cavp.yaml ADDED
@@ -0,0 +1,9 @@
1
+
2
+ model:
3
+ target: cavp.model.cavp_model.CAVP_Inference
4
+ params:
5
+ video_encode: Slowonly_pool
6
+ spec_encode: cnn14_pool
7
+ embed_dim: 512
8
+ video_pretrained: True
9
+ audio_pretrained: True
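
The `target`/`params` layout above follows the common OmegaConf convention used by Diff-Foley-style code, where the dotted `target` path is imported and instantiated with `params`. A minimal sketch of how such a config entry is typically turned into a model (the repository presumably ships its own loader in `infer.py`; the helper below is an assumption, not taken from it):

```python
# Generic instantiation of a "target/params" config entry (illustrative).
import importlib
from omegaconf import OmegaConf

def instantiate_from_config(cfg):
    module_path, cls_name = cfg["target"].rsplit(".", 1)
    cls = getattr(importlib.import_module(module_path), cls_name)
    return cls(**cfg.get("params", {}))

config = OmegaConf.load("cavp/cavp.yaml")
cavp_model = instantiate_from_config(config.model)   # -> CAVP_Inference(...)
```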
cavp/model/cavp_model.py ADDED
@@ -0,0 +1,96 @@
1
+
2
+
3
+ from .cavp_modules import ResNet3dSlowOnly, Cnn14
4
+ import torch.nn as nn
5
+ import torch
6
+ import numpy as np
7
+ import torch.nn.functional as F
8
+
9
+ class CAVP_Inference(nn.Module):
10
+
11
+ def __init__(
12
+ self,
13
+ video_encode,
14
+ spec_encode,
15
+ embed_dim: int,
16
+ video_pretrained: bool = False,
17
+ audio_pretrained: bool = False,
18
+ ):
19
+ super().__init__()
20
+
21
+ self.video_encode = video_encode
22
+ self.spec_encode = spec_encode
23
+
24
+
25
+ # 1). Video Encoder:
26
+ assert self.video_encode == "Slowonly_pool"
27
+ self.video_encoder = ResNet3dSlowOnly(depth=50, pretrained=None) # Doesn't matter to set pretrained=None, since we will load CAVP weight outside.
28
+
29
+ # Video Project & Pooling Head:
30
+ self.video_project_head = nn.Linear(2048, embed_dim)
31
+ self.video_pool = nn.MaxPool1d(kernel_size=16)
32
+
33
+
34
+ # 2). Spec Encoder:
35
+ assert self.spec_encode == "cnn14_pool" # Pretrained
36
+ self.spec_encoder = Cnn14(embed_dim=512)
37
+
38
+ # Spec Project & Pooling Head:
39
+ self.spec_project_head = nn.Identity()
40
+ self.spec_pool = nn.MaxPool1d(kernel_size=16)
41
+
42
+ # 3). Logit Scale:
43
+ self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07))
44
+
45
+
46
+
47
+ def encode_video(self, video, normalize: bool = False, train=False, pool=True):
48
+
49
+ # Video: B x T x 3 x H x W
50
+ assert self.video_encode == "Slowonly_pool"
51
+ video = video.permute(0, 2, 1, 3, 4)
52
+ video_feat = self.video_encoder(video)
53
+ bs, c, t, _, _ = video_feat.shape
54
+ video_feat = video_feat.reshape(bs, c, t).permute(0, 2, 1)
55
+ video_feat = self.video_project_head(video_feat)
56
+
57
+ # Pooling:
58
+ if pool:
59
+ video_feat = self.video_pool(video_feat.permute(0,2,1)).squeeze(2)
60
+
61
+ # Normalize:
62
+ if normalize:
63
+ video_feat = F.normalize(video_feat, dim=-1)
64
+
65
+ return video_feat
66
+
67
+
68
+ def encode_spec(self, spec, normalize: bool = False, pool=True):
69
+ # spec: B x Mel_num x T
70
+ assert self.spec_encode == "cnn14_pool"
71
+ spec = spec.unsqueeze(1) # B x 1 x Mel x T
72
+ spec = spec.permute(0, 1, 3, 2) # B x 1 x T x Mel
73
+ spec_feat = self.spec_encoder(spec) # B x T x C
74
+ spec_feat = self.spec_project_head(spec_feat)
75
+
76
+ # Pooling:
77
+ if pool:
78
+ spec_feat = self.spec_pool(spec_feat.permute(0, 2, 1)).squeeze(2)
79
+
80
+ # Normalize:
81
+ if normalize:
82
+ spec_feat = F.normalize(spec_feat, dim=-1)
83
+
84
+ return spec_feat
85
+
86
+
87
+ def forward(self, video, spec, output_dict=True):
88
+ video_features = self.encode_video(video, normalize=True)
89
+ spec_features = self.encode_spec(spec, normalize=True)
90
+ if output_dict:
91
+ return {
92
+ "video_features": video_features,
93
+ "spec_features": spec_features,
94
+ "logit_scale": self.logit_scale.exp()
95
+ }
96
+ return video_features, spec_features, self.logit_scale.exp()
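
A rough usage sketch for this class is shown below: it loads the CAVP checkpoint and extracts video embeddings with and without temporal pooling. The 16-frame clip length, 224x224 resolution, and the `state_dict` key used when loading the checkpoint are assumptions that should be checked against the preprocessing scripts.

```python
# Illustrative CAVP feature extraction (frame count, resolution, and checkpoint
# key layout are assumptions, not confirmed by the repository).
import torch
from cavp.model.cavp_model import CAVP_Inference

model = CAVP_Inference(video_encode="Slowonly_pool",
                       spec_encode="cnn14_pool",
                       embed_dim=512)
ckpt = torch.load("cavp_epoch66.ckpt", map_location="cpu")
model.load_state_dict(ckpt.get("state_dict", ckpt), strict=False)
model.eval()

video = torch.randn(1, 16, 3, 224, 224)   # B x T x 3 x H x W (assumed 16 frames)
with torch.no_grad():
    per_frame = model.encode_video(video, normalize=True, pool=False)  # B x 16 x 512
    pooled    = model.encode_video(video, normalize=True, pool=True)   # B x 512
print(per_frame.shape, pooled.shape)
```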
cavp/model/cavp_modules.py ADDED
@@ -0,0 +1,1545 @@
1
+ import warnings
2
+
3
+ import torch
4
+ import torch.nn as nn
5
+ from mmcv.cnn import ConvModule, kaiming_init
6
+ from mmcv.runner import _load_checkpoint, load_checkpoint
7
+ from mmcv.utils import print_log
8
+
9
+ import warnings
10
+ import torch.nn as nn
11
+ import torch.utils.checkpoint as cp
12
+ from mmcv.cnn import (ConvModule, NonLocal3d, build_activation_layer,
13
+ constant_init, kaiming_init)
14
+ from mmcv.runner import _load_checkpoint, load_checkpoint
15
+ from mmcv.utils import _BatchNorm
16
+ from torch.nn.modules.utils import _ntuple, _triple
17
+
18
+ from itertools import repeat
19
+ import collections.abc
20
+ from typing import Callable, Optional, Sequence, Tuple
21
+ from torch.utils.checkpoint import checkpoint
22
+
23
+ from torch.nn import functional as F
24
+ from collections import OrderedDict
25
+
26
+ class BasicBlock3d(nn.Module):
27
+ """BasicBlock 3d block for ResNet3D.
28
+ Args:
29
+ inplanes (int): Number of channels for the input in first conv3d layer.
30
+ planes (int): Number of channels produced by some norm/conv3d layers.
31
+ spatial_stride (int): Spatial stride in the conv3d layer. Default: 1.
32
+ temporal_stride (int): Temporal stride in the conv3d layer. Default: 1.
33
+ dilation (int): Spacing between kernel elements. Default: 1.
34
+ downsample (nn.Module | None): Downsample layer. Default: None.
35
+ style (str): ``pytorch`` or ``caffe``. If set to "pytorch", the
36
+ stride-two layer is the 3x3 conv layer, otherwise the stride-two
37
+ layer is the first 1x1 conv layer. Default: 'pytorch'.
38
+ inflate (bool): Whether to inflate kernel. Default: True.
39
+ non_local (bool): Determine whether to apply non-local module in this
40
+ block. Default: False.
41
+ non_local_cfg (dict): Config for non-local module. Default: ``dict()``.
42
+ conv_cfg (dict): Config dict for convolution layer.
43
+ Default: ``dict(type='Conv3d')``.
44
+ norm_cfg (dict): Config for norm layers. required keys are ``type``,
45
+ Default: ``dict(type='BN3d')``.
46
+ act_cfg (dict): Config dict for activation layer.
47
+ Default: ``dict(type='ReLU')``.
48
+ with_cp (bool): Use checkpoint or not. Using checkpoint will save some
49
+ memory while slowing down the training speed. Default: False.
50
+ """
51
+ expansion = 1
52
+
53
+ def __init__(self,
54
+ inplanes,
55
+ planes,
56
+ spatial_stride=1,
57
+ temporal_stride=1,
58
+ dilation=1,
59
+ downsample=None,
60
+ style='pytorch',
61
+ inflate=True,
62
+ non_local=False,
63
+ non_local_cfg=dict(),
64
+ conv_cfg=dict(type='Conv3d'),
65
+ norm_cfg=dict(type='BN3d'),
66
+ act_cfg=dict(type='ReLU'),
67
+ with_cp=False,
68
+ **kwargs):
69
+ super().__init__()
70
+ assert style in ['pytorch', 'caffe']
71
+ # make sure that only ``inflate_style`` is passed into kwargs
72
+ assert set(kwargs).issubset(['inflate_style'])
73
+
74
+ self.inplanes = inplanes
75
+ self.planes = planes
76
+ self.spatial_stride = spatial_stride
77
+ self.temporal_stride = temporal_stride
78
+ self.dilation = dilation
79
+ self.style = style
80
+ self.inflate = inflate
81
+ self.conv_cfg = conv_cfg
82
+ self.norm_cfg = norm_cfg
83
+ self.act_cfg = act_cfg
84
+ self.with_cp = with_cp
85
+ self.non_local = non_local
86
+ self.non_local_cfg = non_local_cfg
87
+
88
+ self.conv1_stride_s = spatial_stride
89
+ self.conv2_stride_s = 1
90
+ self.conv1_stride_t = temporal_stride
91
+ self.conv2_stride_t = 1
92
+
93
+ if self.inflate:
94
+ conv1_kernel_size = (3, 3, 3)
95
+ conv1_padding = (1, dilation, dilation)
96
+ conv2_kernel_size = (3, 3, 3)
97
+ conv2_padding = (1, 1, 1)
98
+ else:
99
+ conv1_kernel_size = (1, 3, 3)
100
+ conv1_padding = (0, dilation, dilation)
101
+ conv2_kernel_size = (1, 3, 3)
102
+ conv2_padding = (0, 1, 1)
103
+
104
+ self.conv1 = ConvModule(
105
+ inplanes,
106
+ planes,
107
+ conv1_kernel_size,
108
+ stride=(self.conv1_stride_t, self.conv1_stride_s,
109
+ self.conv1_stride_s),
110
+ padding=conv1_padding,
111
+ dilation=(1, dilation, dilation),
112
+ bias=False,
113
+ conv_cfg=self.conv_cfg,
114
+ norm_cfg=self.norm_cfg,
115
+ act_cfg=self.act_cfg)
116
+
117
+ self.conv2 = ConvModule(
118
+ planes,
119
+ planes * self.expansion,
120
+ conv2_kernel_size,
121
+ stride=(self.conv2_stride_t, self.conv2_stride_s,
122
+ self.conv2_stride_s),
123
+ padding=conv2_padding,
124
+ bias=False,
125
+ conv_cfg=self.conv_cfg,
126
+ norm_cfg=self.norm_cfg,
127
+ act_cfg=None)
128
+
129
+ self.downsample = downsample
130
+ self.relu = build_activation_layer(self.act_cfg)
131
+
132
+ if self.non_local:
133
+ self.non_local_block = NonLocal3d(self.conv2.norm.num_features,
134
+ **self.non_local_cfg)
135
+
136
+ def forward(self, x):
137
+ """Defines the computation performed at every call."""
138
+
139
+ def _inner_forward(x):
140
+ """Forward wrapper for utilizing checkpoint."""
141
+ identity = x
142
+
143
+ out = self.conv1(x)
144
+ out = self.conv2(out)
145
+
146
+ if self.downsample is not None:
147
+ identity = self.downsample(x)
148
+
149
+ out = out + identity
150
+ return out
151
+
152
+ if self.with_cp and x.requires_grad:
153
+ out = cp.checkpoint(_inner_forward, x)
154
+ else:
155
+ out = _inner_forward(x)
156
+ out = self.relu(out)
157
+
158
+ if self.non_local:
159
+ out = self.non_local_block(out)
160
+
161
+ return out
162
+
163
+
164
+ class Bottleneck3d(nn.Module):
165
+ """Bottleneck 3d block for ResNet3D.
166
+ Args:
167
+ inplanes (int): Number of channels for the input in first conv3d layer.
168
+ planes (int): Number of channels produced by some norm/conv3d layers.
169
+ spatial_stride (int): Spatial stride in the conv3d layer. Default: 1.
170
+ temporal_stride (int): Temporal stride in the conv3d layer. Default: 1.
171
+ dilation (int): Spacing between kernel elements. Default: 1.
172
+ downsample (nn.Module | None): Downsample layer. Default: None.
173
+ style (str): ``pytorch`` or ``caffe``. If set to "pytorch", the
174
+ stride-two layer is the 3x3 conv layer, otherwise the stride-two
175
+ layer is the first 1x1 conv layer. Default: 'pytorch'.
176
+ inflate (bool): Whether to inflate kernel. Default: True.
177
+ inflate_style (str): ``3x1x1`` or ``3x3x3``. which determines the
178
+ kernel sizes and padding strides for conv1 and conv2 in each block.
179
+ Default: '3x1x1'.
180
+ non_local (bool): Determine whether to apply non-local module in this
181
+ block. Default: False.
182
+ non_local_cfg (dict): Config for non-local module. Default: ``dict()``.
183
+ conv_cfg (dict): Config dict for convolution layer.
184
+ Default: ``dict(type='Conv3d')``.
185
+ norm_cfg (dict): Config for norm layers. required keys are ``type``,
186
+ Default: ``dict(type='BN3d')``.
187
+ act_cfg (dict): Config dict for activation layer.
188
+ Default: ``dict(type='ReLU')``.
189
+ with_cp (bool): Use checkpoint or not. Using checkpoint will save some
190
+ memory while slowing down the training speed. Default: False.
191
+ """
192
+ expansion = 4
193
+
194
+ def __init__(self,
195
+ inplanes,
196
+ planes,
197
+ spatial_stride=1,
198
+ temporal_stride=1,
199
+ dilation=1,
200
+ downsample=None,
201
+ style='pytorch',
202
+ inflate=True,
203
+ inflate_style='3x1x1',
204
+ non_local=False,
205
+ non_local_cfg=dict(),
206
+ conv_cfg=dict(type='Conv3d'),
207
+ norm_cfg=dict(type='BN3d'),
208
+ act_cfg=dict(type='ReLU'),
209
+ with_cp=False):
210
+ super().__init__()
211
+ assert style in ['pytorch', 'caffe']
212
+ assert inflate_style in ['3x1x1', '3x3x3']
213
+
214
+ self.inplanes = inplanes
215
+ self.planes = planes
216
+ self.spatial_stride = spatial_stride
217
+ self.temporal_stride = temporal_stride
218
+ self.dilation = dilation
219
+ self.style = style
220
+ self.inflate = inflate
221
+ self.inflate_style = inflate_style
222
+ self.norm_cfg = norm_cfg
223
+ self.conv_cfg = conv_cfg
224
+ self.act_cfg = act_cfg
225
+ self.with_cp = with_cp
226
+ self.non_local = non_local
227
+ self.non_local_cfg = non_local_cfg
228
+
229
+ if self.style == 'pytorch':
230
+ self.conv1_stride_s = 1
231
+ self.conv2_stride_s = spatial_stride
232
+ self.conv1_stride_t = 1
233
+ self.conv2_stride_t = temporal_stride
234
+ else:
235
+ self.conv1_stride_s = spatial_stride
236
+ self.conv2_stride_s = 1
237
+ self.conv1_stride_t = temporal_stride
238
+ self.conv2_stride_t = 1
239
+
240
+ if self.inflate:
241
+ if inflate_style == '3x1x1':
242
+ conv1_kernel_size = (3, 1, 1)
243
+ conv1_padding = (1, 0, 0)
244
+ conv2_kernel_size = (1, 3, 3)
245
+ conv2_padding = (0, dilation, dilation)
246
+ else:
247
+ conv1_kernel_size = (1, 1, 1)
248
+ conv1_padding = (0, 0, 0)
249
+ conv2_kernel_size = (3, 3, 3)
250
+ conv2_padding = (1, dilation, dilation)
251
+ else:
252
+ conv1_kernel_size = (1, 1, 1)
253
+ conv1_padding = (0, 0, 0)
254
+ conv2_kernel_size = (1, 3, 3)
255
+ conv2_padding = (0, dilation, dilation)
256
+
257
+ self.conv1 = ConvModule(
258
+ inplanes,
259
+ planes,
260
+ conv1_kernel_size,
261
+ stride=(self.conv1_stride_t, self.conv1_stride_s,
262
+ self.conv1_stride_s),
263
+ padding=conv1_padding,
264
+ bias=False,
265
+ conv_cfg=self.conv_cfg,
266
+ norm_cfg=self.norm_cfg,
267
+ act_cfg=self.act_cfg)
268
+
269
+ self.conv2 = ConvModule(
270
+ planes,
271
+ planes,
272
+ conv2_kernel_size,
273
+ stride=(self.conv2_stride_t, self.conv2_stride_s,
274
+ self.conv2_stride_s),
275
+ padding=conv2_padding,
276
+ dilation=(1, dilation, dilation),
277
+ bias=False,
278
+ conv_cfg=self.conv_cfg,
279
+ norm_cfg=self.norm_cfg,
280
+ act_cfg=self.act_cfg)
281
+
282
+ self.conv3 = ConvModule(
283
+ planes,
284
+ planes * self.expansion,
285
+ 1,
286
+ bias=False,
287
+ conv_cfg=self.conv_cfg,
288
+ norm_cfg=self.norm_cfg,
289
+ # No activation in the third ConvModule for bottleneck
290
+ act_cfg=None)
291
+
292
+ self.downsample = downsample
293
+ self.relu = build_activation_layer(self.act_cfg)
294
+
295
+ if self.non_local:
296
+ self.non_local_block = NonLocal3d(self.conv3.norm.num_features,
297
+ **self.non_local_cfg)
298
+
299
+ def forward(self, x):
300
+ """Defines the computation performed at every call."""
301
+
302
+ def _inner_forward(x):
303
+ """Forward wrapper for utilizing checkpoint."""
304
+ identity = x
305
+
306
+ out = self.conv1(x)
307
+ out = self.conv2(out)
308
+ out = self.conv3(out)
309
+
310
+ if self.downsample is not None:
311
+ identity = self.downsample(x)
312
+
313
+ out = out + identity
314
+ return out
315
+
316
+ if self.with_cp and x.requires_grad:
317
+ out = cp.checkpoint(_inner_forward, x)
318
+ else:
319
+ out = _inner_forward(x)
320
+ out = self.relu(out)
321
+
322
+ if self.non_local:
323
+ out = self.non_local_block(out)
324
+
325
+ return out
326
+
327
+
328
+ class ResNet3d(nn.Module):
329
+ """ResNet 3d backbone.
330
+ Args:
331
+ depth (int): Depth of resnet, from {18, 34, 50, 101, 152}.
332
+ pretrained (str | None): Name of pretrained model.
333
+ stage_blocks (tuple | None): Set number of stages for each res layer.
334
+ Default: None.
335
+ pretrained2d (bool): Whether to load pretrained 2D model.
336
+ Default: True.
337
+ in_channels (int): Channel num of input features. Default: 3.
338
+ base_channels (int): Channel num of stem output features. Default: 64.
339
+ out_indices (Sequence[int]): Indices of output feature. Default: (3, ).
340
+ num_stages (int): Resnet stages. Default: 4.
341
+ spatial_strides (Sequence[int]):
342
+ Spatial strides of residual blocks of each stage.
343
+ Default: ``(1, 2, 2, 2)``.
344
+ temporal_strides (Sequence[int]):
345
+ Temporal strides of residual blocks of each stage.
346
+ Default: ``(1, 1, 1, 1)``.
347
+ dilations (Sequence[int]): Dilation of each stage.
348
+ Default: ``(1, 1, 1, 1)``.
349
+ conv1_kernel (Sequence[int]): Kernel size of the first conv layer.
350
+ Default: ``(3, 7, 7)``.
351
+ conv1_stride_s (int): Spatial stride of the first conv layer.
352
+ Default: 2.
353
+ conv1_stride_t (int): Temporal stride of the first conv layer.
354
+ Default: 1.
355
+ pool1_stride_s (int): Spatial stride of the first pooling layer.
356
+ Default: 2.
357
+ pool1_stride_t (int): Temporal stride of the first pooling layer.
358
+ Default: 1.
359
+ with_pool2 (bool): Whether to use pool2. Default: True.
360
+ style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two
361
+ layer is the 3x3 conv layer, otherwise the stride-two layer is
362
+ the first 1x1 conv layer. Default: 'pytorch'.
363
+ frozen_stages (int): Stages to be frozen (all param fixed). -1 means
364
+ not freezing any parameters. Default: -1.
365
+ inflate (Sequence[int]): Inflate Dims of each block.
366
+ Default: (1, 1, 1, 1).
367
+ inflate_style (str): ``3x1x1`` or ``3x3x3``. which determines the
368
+ kernel sizes and padding strides for conv1 and conv2 in each block.
369
+ Default: '3x1x1'.
370
+ conv_cfg (dict): Config for conv layers. required keys are ``type``
371
+ Default: ``dict(type='Conv3d')``.
372
+ norm_cfg (dict): Config for norm layers. required keys are ``type`` and
373
+ ``requires_grad``.
374
+ Default: ``dict(type='BN3d', requires_grad=True)``.
375
+ act_cfg (dict): Config dict for activation layer.
376
+ Default: ``dict(type='ReLU', inplace=True)``.
377
+ norm_eval (bool): Whether to set BN layers to eval mode, namely, freeze
378
+ running stats (mean and var). Default: False.
379
+ with_cp (bool): Use checkpoint or not. Using checkpoint will save some
380
+ memory while slowing down the training speed. Default: False.
381
+ non_local (Sequence[int]): Determine whether to apply non-local module
382
+ in the corresponding block of each stages. Default: (0, 0, 0, 0).
383
+ non_local_cfg (dict): Config for non-local module. Default: ``dict()``.
384
+ zero_init_residual (bool):
385
+ Whether to use zero initialization for residual block,
386
+ Default: True.
387
+ kwargs (dict, optional): Key arguments for "make_res_layer".
388
+ """
389
+
390
+ arch_settings = {
391
+ 18: (BasicBlock3d, (2, 2, 2, 2)),
392
+ 34: (BasicBlock3d, (3, 4, 6, 3)),
393
+ 50: (Bottleneck3d, (3, 4, 6, 3)),
394
+ 101: (Bottleneck3d, (3, 4, 23, 3)),
395
+ 152: (Bottleneck3d, (3, 8, 36, 3))
396
+ }
397
+
398
+ def __init__(self,
399
+ depth,
400
+ pretrained,
401
+ stage_blocks=None,
402
+ pretrained2d=True,
403
+ in_channels=3,
404
+ num_stages=4,
405
+ base_channels=64,
406
+ out_indices=(3, ),
407
+ spatial_strides=(1, 2, 2, 2),
408
+ temporal_strides=(1, 1, 1, 1),
409
+ dilations=(1, 1, 1, 1),
410
+ conv1_kernel=(3, 7, 7),
411
+ conv1_stride_s=2,
412
+ conv1_stride_t=1,
413
+ pool1_stride_s=2,
414
+ pool1_stride_t=1,
415
+ with_pool1=True,
416
+ with_pool2=True,
417
+ style='pytorch',
418
+ frozen_stages=-1,
419
+ inflate=(1, 1, 1, 1),
420
+ inflate_style='3x1x1',
421
+ conv_cfg=dict(type='Conv3d'),
422
+ norm_cfg=dict(type='BN3d', requires_grad=True),
423
+ act_cfg=dict(type='ReLU', inplace=True),
424
+ norm_eval=False,
425
+ with_cp=False,
426
+ non_local=(0, 0, 0, 0),
427
+ non_local_cfg=dict(),
428
+ zero_init_residual=True,
429
+ **kwargs):
430
+ super().__init__()
431
+ if depth not in self.arch_settings:
432
+ raise KeyError(f'invalid depth {depth} for resnet')
433
+ self.depth = depth
434
+ self.pretrained = pretrained
435
+ self.pretrained2d = pretrained2d
436
+ self.in_channels = in_channels
437
+ self.base_channels = base_channels
438
+ self.num_stages = num_stages
439
+ assert 1 <= num_stages <= 4
440
+ self.stage_blocks = stage_blocks
441
+ self.out_indices = out_indices
442
+ assert max(out_indices) < num_stages
443
+ self.spatial_strides = spatial_strides
444
+ self.temporal_strides = temporal_strides
445
+ self.dilations = dilations
446
+ assert len(spatial_strides) == len(temporal_strides) == len(
447
+ dilations) == num_stages
448
+ if self.stage_blocks is not None:
449
+ assert len(self.stage_blocks) == num_stages
450
+
451
+ self.conv1_kernel = conv1_kernel
452
+ self.conv1_stride_s = conv1_stride_s
453
+ self.conv1_stride_t = conv1_stride_t
454
+ self.pool1_stride_s = pool1_stride_s
455
+ self.pool1_stride_t = pool1_stride_t
456
+ self.with_pool1 = with_pool1
457
+ self.with_pool2 = with_pool2
458
+ self.style = style
459
+ self.frozen_stages = frozen_stages
460
+ self.stage_inflations = _ntuple(num_stages)(inflate)
461
+ self.non_local_stages = _ntuple(num_stages)(non_local)
462
+ self.inflate_style = inflate_style
463
+ self.conv_cfg = conv_cfg
464
+ self.norm_cfg = norm_cfg
465
+ self.act_cfg = act_cfg
466
+ self.norm_eval = norm_eval
467
+ self.with_cp = with_cp
468
+ self.zero_init_residual = zero_init_residual
469
+
470
+ self.block, stage_blocks = self.arch_settings[depth]
471
+
472
+ if self.stage_blocks is None:
473
+ self.stage_blocks = stage_blocks[:num_stages]
474
+
475
+ self.inplanes = self.base_channels
476
+
477
+ self.non_local_cfg = non_local_cfg
478
+
479
+ self._make_stem_layer()
480
+
481
+ self.res_layers = []
482
+ for i, num_blocks in enumerate(self.stage_blocks):
483
+ spatial_stride = spatial_strides[i]
484
+ temporal_stride = temporal_strides[i]
485
+ dilation = dilations[i]
486
+ planes = self.base_channels * 2**i
487
+ res_layer = self.make_res_layer(
488
+ self.block,
489
+ self.inplanes,
490
+ planes,
491
+ num_blocks,
492
+ spatial_stride=spatial_stride,
493
+ temporal_stride=temporal_stride,
494
+ dilation=dilation,
495
+ style=self.style,
496
+ norm_cfg=self.norm_cfg,
497
+ conv_cfg=self.conv_cfg,
498
+ act_cfg=self.act_cfg,
499
+ non_local=self.non_local_stages[i],
500
+ non_local_cfg=self.non_local_cfg,
501
+ inflate=self.stage_inflations[i],
502
+ inflate_style=self.inflate_style,
503
+ with_cp=with_cp,
504
+ **kwargs)
505
+ self.inplanes = planes * self.block.expansion
506
+ layer_name = f'layer{i + 1}'
507
+ self.add_module(layer_name, res_layer)
508
+ self.res_layers.append(layer_name)
509
+
510
+ self.feat_dim = self.block.expansion * self.base_channels * 2**(
511
+ len(self.stage_blocks) - 1)
512
+
513
+
514
+ # Adaptive Pool:
515
+ self.adaptive_pool = nn.AdaptiveAvgPool2d((1,1))
516
+
517
+ @staticmethod
518
+ def make_res_layer(block,
519
+ inplanes,
520
+ planes,
521
+ blocks,
522
+ spatial_stride=1,
523
+ temporal_stride=1,
524
+ dilation=1,
525
+ style='pytorch',
526
+ inflate=1,
527
+ inflate_style='3x1x1',
528
+ non_local=0,
529
+ non_local_cfg=dict(),
530
+ norm_cfg=None,
531
+ act_cfg=None,
532
+ conv_cfg=None,
533
+ with_cp=False,
534
+ **kwargs):
535
+ """Build residual layer for ResNet3D.
536
+ Args:
537
+ block (nn.Module): Residual module to be built.
538
+ inplanes (int): Number of channels for the input feature
539
+ in each block.
540
+ planes (int): Number of channels for the output feature
541
+ in each block.
542
+ blocks (int): Number of residual blocks.
543
+ spatial_stride (int | Sequence[int]): Spatial strides in
544
+ residual and conv layers. Default: 1.
545
+ temporal_stride (int | Sequence[int]): Temporal strides in
546
+ residual and conv layers. Default: 1.
547
+ dilation (int): Spacing between kernel elements. Default: 1.
548
+ style (str): ``pytorch`` or ``caffe``. If set to ``pytorch``,
549
+ the stride-two layer is the 3x3 conv layer, otherwise
550
+ the stride-two layer is the first 1x1 conv layer.
551
+ Default: ``pytorch``.
552
+ inflate (int | Sequence[int]): Determine whether to inflate
553
+ for each block. Default: 1.
554
+ inflate_style (str): ``3x1x1`` or ``3x3x3``. which determines
555
+ the kernel sizes and padding strides for conv1 and conv2
556
+ in each block. Default: '3x1x1'.
557
+ non_local (int | Sequence[int]): Determine whether to apply
558
+ non-local module in the corresponding block of each stages.
559
+ Default: 0.
560
+ non_local_cfg (dict): Config for non-local module.
561
+ Default: ``dict()``.
562
+ conv_cfg (dict | None): Config for norm layers. Default: None.
563
+ norm_cfg (dict | None): Config for norm layers. Default: None.
564
+ act_cfg (dict | None): Config for activate layers. Default: None.
565
+ with_cp (bool | None): Use checkpoint or not. Using checkpoint
566
+ will save some memory while slowing down the training speed.
567
+ Default: False.
568
+ Returns:
569
+ nn.Module: A residual layer for the given config.
570
+ """
571
+ inflate = inflate if not isinstance(inflate,
572
+ int) else (inflate, ) * blocks
573
+ non_local = non_local if not isinstance(
574
+ non_local, int) else (non_local, ) * blocks
575
+ assert len(inflate) == blocks and len(non_local) == blocks
576
+ downsample = None
577
+ if spatial_stride != 1 or inplanes != planes * block.expansion:
578
+ downsample = ConvModule(
579
+ inplanes,
580
+ planes * block.expansion,
581
+ kernel_size=1,
582
+ stride=(temporal_stride, spatial_stride, spatial_stride),
583
+ bias=False,
584
+ conv_cfg=conv_cfg,
585
+ norm_cfg=norm_cfg,
586
+ act_cfg=None)
587
+
588
+ layers = []
589
+ layers.append(
590
+ block(
591
+ inplanes,
592
+ planes,
593
+ spatial_stride=spatial_stride,
594
+ temporal_stride=temporal_stride,
595
+ dilation=dilation,
596
+ downsample=downsample,
597
+ style=style,
598
+ inflate=(inflate[0] == 1),
599
+ inflate_style=inflate_style,
600
+ non_local=(non_local[0] == 1),
601
+ non_local_cfg=non_local_cfg,
602
+ norm_cfg=norm_cfg,
603
+ conv_cfg=conv_cfg,
604
+ act_cfg=act_cfg,
605
+ with_cp=with_cp,
606
+ **kwargs))
607
+ inplanes = planes * block.expansion
608
+ for i in range(1, blocks):
609
+ layers.append(
610
+ block(
611
+ inplanes,
612
+ planes,
613
+ spatial_stride=1,
614
+ temporal_stride=1,
615
+ dilation=dilation,
616
+ style=style,
617
+ inflate=(inflate[i] == 1),
618
+ inflate_style=inflate_style,
619
+ non_local=(non_local[i] == 1),
620
+ non_local_cfg=non_local_cfg,
621
+ norm_cfg=norm_cfg,
622
+ conv_cfg=conv_cfg,
623
+ act_cfg=act_cfg,
624
+ with_cp=with_cp,
625
+ **kwargs))
626
+
627
+ return nn.Sequential(*layers)
628
+
629
+ @staticmethod
630
+ def _inflate_conv_params(conv3d, state_dict_2d, module_name_2d,
631
+ inflated_param_names):
632
+ """Inflate a conv module from 2d to 3d.
633
+ Args:
634
+ conv3d (nn.Module): The destination conv3d module.
635
+ state_dict_2d (OrderedDict): The state dict of pretrained 2d model.
636
+ module_name_2d (str): The name of corresponding conv module in the
637
+ 2d model.
638
+ inflated_param_names (list[str]): List of parameters that have been
639
+ inflated.
640
+ """
641
+ weight_2d_name = module_name_2d + '.weight'
642
+
643
+ conv2d_weight = state_dict_2d[weight_2d_name]
644
+ kernel_t = conv3d.weight.data.shape[2]
645
+
646
+ new_weight = conv2d_weight.data.unsqueeze(2).expand_as(
647
+ conv3d.weight) / kernel_t
648
+ conv3d.weight.data.copy_(new_weight)
649
+ inflated_param_names.append(weight_2d_name)
650
+
651
+ if getattr(conv3d, 'bias') is not None:
652
+ bias_2d_name = module_name_2d + '.bias'
653
+ conv3d.bias.data.copy_(state_dict_2d[bias_2d_name])
654
+ inflated_param_names.append(bias_2d_name)
655
+
656
+ @staticmethod
657
+ def _inflate_bn_params(bn3d, state_dict_2d, module_name_2d,
658
+ inflated_param_names):
659
+ """Inflate a norm module from 2d to 3d.
660
+ Args:
661
+ bn3d (nn.Module): The destination bn3d module.
662
+ state_dict_2d (OrderedDict): The state dict of pretrained 2d model.
663
+ module_name_2d (str): The name of corresponding bn module in the
664
+ 2d model.
665
+ inflated_param_names (list[str]): List of parameters that have been
666
+ inflated.
667
+ """
668
+ for param_name, param in bn3d.named_parameters():
669
+ param_2d_name = f'{module_name_2d}.{param_name}'
670
+ param_2d = state_dict_2d[param_2d_name]
671
+ if param.data.shape != param_2d.shape:
672
+ warnings.warn(f'The parameter of {module_name_2d} is not '
673
+ 'loaded due to incompatible shapes. ')
674
+ return
675
+
676
+ param.data.copy_(param_2d)
677
+ inflated_param_names.append(param_2d_name)
678
+
679
+ for param_name, param in bn3d.named_buffers():
680
+ param_2d_name = f'{module_name_2d}.{param_name}'
681
+ # some buffers like num_batches_tracked may not exist in old
682
+ # checkpoints
683
+ if param_2d_name in state_dict_2d:
684
+ param_2d = state_dict_2d[param_2d_name]
685
+ param.data.copy_(param_2d)
686
+ inflated_param_names.append(param_2d_name)
687
+
688
+ @staticmethod
689
+ def _inflate_weights(self, logger):
690
+ """Inflate the resnet2d parameters to resnet3d.
691
+ The differences between resnet3d and resnet2d mainly lie in an extra
692
+ axis of conv kernel. To utilize the pretrained parameters in 2d model,
693
+ the weight of conv2d models should be inflated to fit in the shapes of
694
+ the 3d counterpart.
695
+ Args:
696
+ logger (logging.Logger): The logger used to print
697
+ debugging information.
698
+ """
699
+
700
+ state_dict_r2d = _load_checkpoint(self.pretrained)
701
+ if 'state_dict' in state_dict_r2d:
702
+ state_dict_r2d = state_dict_r2d['state_dict']
703
+
704
+ inflated_param_names = []
705
+ for name, module in self.named_modules():
706
+ if isinstance(module, ConvModule):
707
+ # we use a ConvModule to wrap conv+bn+relu layers, thus the
708
+ # name mapping is needed
709
+ if 'downsample' in name:
710
+ # layer{X}.{Y}.downsample.conv->layer{X}.{Y}.downsample.0
711
+ original_conv_name = name + '.0'
712
+ # layer{X}.{Y}.downsample.bn->layer{X}.{Y}.downsample.1
713
+ original_bn_name = name + '.1'
714
+ else:
715
+ # layer{X}.{Y}.conv{n}.conv->layer{X}.{Y}.conv{n}
716
+ original_conv_name = name
717
+ # layer{X}.{Y}.conv{n}.bn->layer{X}.{Y}.bn{n}
718
+ original_bn_name = name.replace('conv', 'bn')
719
+ if original_conv_name + '.weight' not in state_dict_r2d:
720
+ logger.warning(f'Module not exist in the state_dict_r2d'
721
+ f': {original_conv_name}')
722
+ else:
723
+ shape_2d = state_dict_r2d[original_conv_name +
724
+ '.weight'].shape
725
+ shape_3d = module.conv.weight.data.shape
726
+ if shape_2d != shape_3d[:2] + shape_3d[3:]:
727
+ logger.warning(f'Weight shape mismatch for '
728
+ f': {original_conv_name} : '
729
+ f'3d weight shape: {shape_3d}; '
730
+ f'2d weight shape: {shape_2d}. ')
731
+ else:
732
+ self._inflate_conv_params(module.conv, state_dict_r2d,
733
+ original_conv_name,
734
+ inflated_param_names)
735
+
736
+ if original_bn_name + '.weight' not in state_dict_r2d:
737
+ logger.warning(f'Module not exist in the state_dict_r2d'
738
+ f': {original_bn_name}')
739
+ else:
740
+ self._inflate_bn_params(module.bn, state_dict_r2d,
741
+ original_bn_name,
742
+ inflated_param_names)
743
+
744
+ # check if any parameters in the 2d checkpoint are not loaded
745
+ remaining_names = set(
746
+ state_dict_r2d.keys()) - set(inflated_param_names)
747
+ if remaining_names:
748
+ logger.info(f'These parameters in the 2d checkpoint are not loaded'
749
+ f': {remaining_names}')
750
+
751
+ def inflate_weights(self, logger):
752
+ self._inflate_weights(self, logger)
753
+
754
+ def _make_stem_layer(self):
755
+ """Construct the stem layers consists of a conv+norm+act module and a
756
+ pooling layer."""
757
+ self.conv1 = ConvModule(
758
+ self.in_channels,
759
+ self.base_channels,
760
+ kernel_size=self.conv1_kernel,
761
+ stride=(self.conv1_stride_t, self.conv1_stride_s,
762
+ self.conv1_stride_s),
763
+ padding=tuple([(k - 1) // 2 for k in _triple(self.conv1_kernel)]),
764
+ bias=False,
765
+ conv_cfg=self.conv_cfg,
766
+ norm_cfg=self.norm_cfg,
767
+ act_cfg=self.act_cfg)
768
+
769
+ self.maxpool = nn.MaxPool3d(
770
+ kernel_size=(1, 3, 3),
771
+ stride=(self.pool1_stride_t, self.pool1_stride_s,
772
+ self.pool1_stride_s),
773
+ padding=(0, 1, 1))
774
+
775
+ self.pool2 = nn.MaxPool3d(kernel_size=(2, 1, 1), stride=(2, 1, 1))
776
+
777
+ def _freeze_stages(self):
778
+ """Prevent all the parameters from being optimized before
779
+ ``self.frozen_stages``."""
780
+ if self.frozen_stages >= 0:
781
+ self.conv1.eval()
782
+ for param in self.conv1.parameters():
783
+ param.requires_grad = False
784
+
785
+ for i in range(1, self.frozen_stages + 1):
786
+ m = getattr(self, f'layer{i}')
787
+ m.eval()
788
+ for param in m.parameters():
789
+ param.requires_grad = False
790
+
791
+ @staticmethod
792
+ def _init_weights(self, pretrained=None):
793
+ """Initiate the parameters either from existing checkpoint or from
794
+ scratch.
795
+ Args:
796
+ pretrained (str | None): The path of the pretrained weight. Will
797
+ override the original `pretrained` if set. The arg is added to
798
+ be compatible with mmdet. Default: None.
799
+ """
800
+ if pretrained:
801
+ self.pretrained = pretrained
802
+ if isinstance(self.pretrained, str):
803
+ logger = get_root_logger()
804
+ logger.info(f'load model from: {self.pretrained}')
805
+
806
+ if self.pretrained2d:
807
+ # Inflate 2D model into 3D model.
808
+ self.inflate_weights(logger)
809
+
810
+ else:
811
+ # Directly load 3D model.
812
+ load_checkpoint(
813
+ self, self.pretrained, strict=False, logger=logger)
814
+
815
+ elif self.pretrained is None:
816
+ for m in self.modules():
817
+ if isinstance(m, nn.Conv3d):
818
+ kaiming_init(m)
819
+ elif isinstance(m, _BatchNorm):
820
+ constant_init(m, 1)
821
+
822
+ if self.zero_init_residual:
823
+ for m in self.modules():
824
+ if isinstance(m, Bottleneck3d):
825
+ constant_init(m.conv3.bn, 0)
826
+ elif isinstance(m, BasicBlock3d):
827
+ constant_init(m.conv2.bn, 0)
828
+ else:
829
+ raise TypeError('pretrained must be a str or None')
830
+
831
+ def init_weights(self, pretrained=None):
832
+ self._init_weights(self, pretrained)
833
+
834
+ def forward(self, x):
835
+ """Defines the computation performed at every call.
836
+ Args:
837
+ x (torch.Tensor): The input data.
838
+ Returns:
839
+ torch.Tensor: The feature of the input
840
+ samples extracted by the backbone.
841
+ """
842
+ x = self.conv1(x)
843
+ if self.with_pool1:
844
+ x = self.maxpool(x)
845
+ outs = []
846
+ for i, layer_name in enumerate(self.res_layers):
847
+ res_layer = getattr(self, layer_name)
848
+ x = res_layer(x)
849
+ if i == 0 and self.with_pool2:
850
+ x = self.pool2(x)
851
+ if i in self.out_indices:
852
+ outs.append(x)
853
+ if len(outs) == 1:
854
+ out = outs[0]
855
+ out = self.adaptive_pool(out)
856
+ return out
857
+
858
+ return tuple(outs)
859
+
860
+ def train(self, mode=True):
861
+ """Set the optimization status when training."""
862
+ super().train(mode)
863
+ self._freeze_stages()
864
+ if mode and self.norm_eval:
865
+ for m in self.modules():
866
+ if isinstance(m, _BatchNorm):
867
+ m.eval()
868
+
869
+
870
+
871
+
872
+ class ResNet3dPathway(ResNet3d):
873
+ """A pathway of Slowfast based on ResNet3d.
874
+ Args:
875
+ *args (arguments): Arguments same as :class:``ResNet3d``.
876
+ lateral (bool): Determines whether to enable the lateral connection
877
+ from another pathway. Default: False.
878
+ speed_ratio (int): Speed ratio indicating the ratio between time
879
+ dimension of the fast and slow pathway, corresponding to the
880
+ ``alpha`` in the paper. Default: 8.
881
+ channel_ratio (int): Reduce the channel number of fast pathway
882
+ by ``channel_ratio``, corresponding to ``beta`` in the paper.
883
+ Default: 8.
884
+ fusion_kernel (int): The kernel size of lateral fusion.
885
+ Default: 5.
886
+ **kwargs (keyword arguments): Keywords arguments for ResNet3d.
887
+ """
888
+
889
+ def __init__(self,
890
+ *args,
891
+ lateral=False,
892
+ lateral_norm=False,
893
+ speed_ratio=8,
894
+ channel_ratio=8,
895
+ fusion_kernel=5,
896
+ **kwargs):
897
+ self.lateral = lateral
898
+ self.lateral_norm = lateral_norm
899
+ self.speed_ratio = speed_ratio
900
+ self.channel_ratio = channel_ratio
901
+ self.fusion_kernel = fusion_kernel
902
+ super().__init__(*args, **kwargs)
903
+ self.inplanes = self.base_channels
904
+ if self.lateral:
905
+ self.conv1_lateral = ConvModule(
906
+ self.inplanes // self.channel_ratio,
907
+ # https://arxiv.org/abs/1812.03982, the
908
+ # third type of lateral connection has out_channel:
909
+ # 2 * \beta * C
910
+ self.inplanes * 2 // self.channel_ratio,
911
+ kernel_size=(fusion_kernel, 1, 1),
912
+ stride=(self.speed_ratio, 1, 1),
913
+ padding=((fusion_kernel - 1) // 2, 0, 0),
914
+ bias=False,
915
+ conv_cfg=self.conv_cfg,
916
+ norm_cfg=self.norm_cfg if self.lateral_norm else None,
917
+ act_cfg=self.act_cfg if self.lateral_norm else None)
918
+
919
+ self.lateral_connections = []
920
+ for i in range(len(self.stage_blocks)):
921
+ planes = self.base_channels * 2**i
922
+ self.inplanes = planes * self.block.expansion
923
+
924
+ if lateral and i != self.num_stages - 1:
925
+ # no lateral connection needed in final stage
926
+ lateral_name = f'layer{(i + 1)}_lateral'
927
+ setattr(
928
+ self, lateral_name,
929
+ ConvModule(
930
+ self.inplanes // self.channel_ratio,
931
+ self.inplanes * 2 // self.channel_ratio,
932
+ kernel_size=(fusion_kernel, 1, 1),
933
+ stride=(self.speed_ratio, 1, 1),
934
+ padding=((fusion_kernel - 1) // 2, 0, 0),
935
+ bias=False,
936
+ conv_cfg=self.conv_cfg,
937
+ norm_cfg=self.norm_cfg if self.lateral_norm else None,
938
+ act_cfg=self.act_cfg if self.lateral_norm else None))
939
+ self.lateral_connections.append(lateral_name)
940
+
941
+ def make_res_layer(self,
942
+ block,
943
+ inplanes,
944
+ planes,
945
+ blocks,
946
+ spatial_stride=1,
947
+ temporal_stride=1,
948
+ dilation=1,
949
+ style='pytorch',
950
+ inflate=1,
951
+ inflate_style='3x1x1',
952
+ non_local=0,
953
+ non_local_cfg=dict(),
954
+ conv_cfg=None,
955
+ norm_cfg=None,
956
+ act_cfg=None,
957
+ with_cp=False):
958
+ """Build residual layer for Slowfast.
959
+ Args:
960
+ block (nn.Module): Residual module to be built.
961
+ inplanes (int): Number of channels for the input
962
+ feature in each block.
963
+ planes (int): Number of channels for the output
964
+ feature in each block.
965
+ blocks (int): Number of residual blocks.
966
+ spatial_stride (int | Sequence[int]): Spatial strides
967
+ in residual and conv layers. Default: 1.
968
+ temporal_stride (int | Sequence[int]): Temporal strides in
969
+ residual and conv layers. Default: 1.
970
+ dilation (int): Spacing between kernel elements. Default: 1.
971
+ style (str): ``pytorch`` or ``caffe``. If set to ``pytorch``,
972
+ the stride-two layer is the 3x3 conv layer,
973
+ otherwise the stride-two layer is the first 1x1 conv layer.
974
+ Default: ``pytorch``.
975
+ inflate (int | Sequence[int]): Determine whether to inflate
976
+ for each block. Default: 1.
977
+ inflate_style (str): ``3x1x1`` or ``3x3x3``. which determines
978
+ the kernel sizes and padding strides for conv1 and
979
+ conv2 in each block. Default: ``3x1x1``.
980
+ non_local (int | Sequence[int]): Determine whether to apply
981
+ non-local module in the corresponding block of each stages.
982
+ Default: 0.
983
+ non_local_cfg (dict): Config for non-local module.
984
+ Default: ``dict()``.
985
+ conv_cfg (dict | None): Config for conv layers. Default: None.
986
+ norm_cfg (dict | None): Config for norm layers. Default: None.
987
+ act_cfg (dict | None): Config for activate layers. Default: None.
988
+ with_cp (bool): Use checkpoint or not. Using checkpoint will save
989
+ some memory while slowing down the training speed.
990
+ Default: False.
991
+ Returns:
992
+ nn.Module: A residual layer for the given config.
993
+ """
994
+ inflate = inflate if not isinstance(inflate,
995
+ int) else (inflate, ) * blocks
996
+ non_local = non_local if not isinstance(
997
+ non_local, int) else (non_local, ) * blocks
998
+ assert len(inflate) == blocks and len(non_local) == blocks
999
+ if self.lateral:
1000
+ lateral_inplanes = inplanes * 2 // self.channel_ratio
1001
+ else:
1002
+ lateral_inplanes = 0
1003
+ if (spatial_stride != 1
1004
+ or (inplanes + lateral_inplanes) != planes * block.expansion):
1005
+ downsample = ConvModule(
1006
+ inplanes + lateral_inplanes,
1007
+ planes * block.expansion,
1008
+ kernel_size=1,
1009
+ stride=(temporal_stride, spatial_stride, spatial_stride),
1010
+ bias=False,
1011
+ conv_cfg=conv_cfg,
1012
+ norm_cfg=norm_cfg,
1013
+ act_cfg=None)
1014
+ else:
1015
+ downsample = None
1016
+
1017
+ layers = []
1018
+ layers.append(
1019
+ block(
1020
+ inplanes + lateral_inplanes,
1021
+ planes,
1022
+ spatial_stride,
1023
+ temporal_stride,
1024
+ dilation,
1025
+ downsample,
1026
+ style=style,
1027
+ inflate=(inflate[0] == 1),
1028
+ inflate_style=inflate_style,
1029
+ non_local=(non_local[0] == 1),
1030
+ non_local_cfg=non_local_cfg,
1031
+ conv_cfg=conv_cfg,
1032
+ norm_cfg=norm_cfg,
1033
+ act_cfg=act_cfg,
1034
+ with_cp=with_cp))
1035
+ inplanes = planes * block.expansion
1036
+
1037
+ for i in range(1, blocks):
1038
+ layers.append(
1039
+ block(
1040
+ inplanes,
1041
+ planes,
1042
+ 1,
1043
+ 1,
1044
+ dilation,
1045
+ style=style,
1046
+ inflate=(inflate[i] == 1),
1047
+ inflate_style=inflate_style,
1048
+ non_local=(non_local[i] == 1),
1049
+ non_local_cfg=non_local_cfg,
1050
+ conv_cfg=conv_cfg,
1051
+ norm_cfg=norm_cfg,
1052
+ act_cfg=act_cfg,
1053
+ with_cp=with_cp))
1054
+
1055
+ return nn.Sequential(*layers)
1056
+
1057
+ def inflate_weights(self, logger):
1058
+ """Inflate the resnet2d parameters to resnet3d pathway.
1059
+ The differences between resnet3d and resnet2d mainly lie in an extra
1060
+ axis of conv kernel. To utilize the pretrained parameters in 2d model,
1061
+ the weight of conv2d models should be inflated to fit in the shapes of
1062
+ the 3d counterpart. For pathway the ``lateral_connection`` part should
1063
+ not be inflated from 2d weights.
1064
+ Args:
1065
+ logger (logging.Logger): The logger used to print
1066
+ debugging information.
1067
+ """
1068
+
1069
+ state_dict_r2d = _load_checkpoint(self.pretrained)
1070
+ if 'state_dict' in state_dict_r2d:
1071
+ state_dict_r2d = state_dict_r2d['state_dict']
1072
+
1073
+ inflated_param_names = []
1074
+ for name, module in self.named_modules():
1075
+ if 'lateral' in name:
1076
+ continue
1077
+ if isinstance(module, ConvModule):
1078
+ # we use a ConvModule to wrap conv+bn+relu layers, thus the
1079
+ # name mapping is needed
1080
+ if 'downsample' in name:
1081
+ # layer{X}.{Y}.downsample.conv->layer{X}.{Y}.downsample.0
1082
+ original_conv_name = name + '.0'
1083
+ # layer{X}.{Y}.downsample.bn->layer{X}.{Y}.downsample.1
1084
+ original_bn_name = name + '.1'
1085
+ else:
1086
+ # layer{X}.{Y}.conv{n}.conv->layer{X}.{Y}.conv{n}
1087
+ original_conv_name = name
1088
+ # layer{X}.{Y}.conv{n}.bn->layer{X}.{Y}.bn{n}
1089
+ original_bn_name = name.replace('conv', 'bn')
1090
+ if original_conv_name + '.weight' not in state_dict_r2d:
1091
+ logger.warning(f'Module not exist in the state_dict_r2d'
1092
+ f': {original_conv_name}')
1093
+ else:
1094
+ self._inflate_conv_params(module.conv, state_dict_r2d,
1095
+ original_conv_name,
1096
+ inflated_param_names)
1097
+ if original_bn_name + '.weight' not in state_dict_r2d:
1098
+ logger.warning(f'Module not exist in the state_dict_r2d'
1099
+ f': {original_bn_name}')
1100
+ else:
1101
+ self._inflate_bn_params(module.bn, state_dict_r2d,
1102
+ original_bn_name,
1103
+ inflated_param_names)
1104
+
1105
+ # check if any parameters in the 2d checkpoint are not loaded
1106
+ remaining_names = set(
1107
+ state_dict_r2d.keys()) - set(inflated_param_names)
1108
+ if remaining_names:
1109
+ logger.info(f'These parameters in the 2d checkpoint are not loaded'
1110
+ f': {remaining_names}')
1111
+
1112
+ def _inflate_conv_params(self, conv3d, state_dict_2d, module_name_2d,
1113
+ inflated_param_names):
1114
+ """Inflate a conv module from 2d to 3d.
1115
+ The differences of conv modules between 2d and 3d in Pathway
1116
+ mainly lie in the inplanes due to lateral connections. To fit the
1117
+ shapes of the lateral connection counterpart, it will expand
1118
+ parameters by concatting conv2d parameters and extra zero paddings.
1119
+ Args:
1120
+ conv3d (nn.Module): The destination conv3d module.
1121
+ state_dict_2d (OrderedDict): The state dict of pretrained 2d model.
1122
+ module_name_2d (str): The name of corresponding conv module in the
1123
+ 2d model.
1124
+ inflated_param_names (list[str]): List of parameters that have been
1125
+ inflated.
1126
+ """
1127
+ weight_2d_name = module_name_2d + '.weight'
1128
+ conv2d_weight = state_dict_2d[weight_2d_name]
1129
+ old_shape = conv2d_weight.shape
1130
+ new_shape = conv3d.weight.data.shape
1131
+ kernel_t = new_shape[2]
1132
+
1133
+ if new_shape[1] != old_shape[1]:
1134
+ if new_shape[1] < old_shape[1]:
1135
+ warnings.warn(f'The parameter of {module_name_2d} is not '
1136
+ 'loaded due to incompatible shapes. ')
1137
+ return
1138
+ # Inplanes may be different due to lateral connections
1139
+ new_channels = new_shape[1] - old_shape[1]
1140
+ pad_shape = old_shape
1141
+ pad_shape = pad_shape[:1] + (new_channels, ) + pad_shape[2:]
1142
+ # Expand parameters by concat extra channels
1143
+ conv2d_weight = torch.cat(
1144
+ (conv2d_weight,
1145
+ torch.zeros(pad_shape).type_as(conv2d_weight).to(
1146
+ conv2d_weight.device)),
1147
+ dim=1)
1148
+
1149
+ new_weight = conv2d_weight.data.unsqueeze(2).expand_as(
1150
+ conv3d.weight) / kernel_t
1151
+ conv3d.weight.data.copy_(new_weight)
1152
+ inflated_param_names.append(weight_2d_name)
1153
+
1154
+ if getattr(conv3d, 'bias') is not None:
1155
+ bias_2d_name = module_name_2d + '.bias'
1156
+ conv3d.bias.data.copy_(state_dict_2d[bias_2d_name])
1157
+ inflated_param_names.append(bias_2d_name)
1158
+
1159
+ def _freeze_stages(self):
1160
+ """Prevent all the parameters from being optimized before
1161
+ `self.frozen_stages`."""
1162
+ if self.frozen_stages >= 0:
1163
+ self.conv1.eval()
1164
+ for param in self.conv1.parameters():
1165
+ param.requires_grad = False
1166
+
1167
+ for i in range(1, self.frozen_stages + 1):
1168
+ m = getattr(self, f'layer{i}')
1169
+ m.eval()
1170
+ for param in m.parameters():
1171
+ param.requires_grad = False
1172
+
1173
+ if i != len(self.res_layers) and self.lateral:
1174
+ # No fusion needed in the final stage
1175
+ lateral_name = self.lateral_connections[i - 1]
1176
+ conv_lateral = getattr(self, lateral_name)
1177
+ conv_lateral.eval()
1178
+ for param in conv_lateral.parameters():
1179
+ param.requires_grad = False
1180
+
1181
+ def init_weights(self, pretrained=None):
1182
+ """Initiate the parameters either from existing checkpoint or from
1183
+ scratch."""
1184
+ if pretrained:
1185
+ self.pretrained = pretrained
1186
+
1187
+ # Override the init_weights of i3d
1188
+ super().init_weights()
1189
+ for module_name in self.lateral_connections:
1190
+ layer = getattr(self, module_name)
1191
+ for m in layer.modules():
1192
+ if isinstance(m, (nn.Conv3d, nn.Conv2d)):
1193
+ kaiming_init(m)
1194
+
1195
+
1196
+ pathway_cfg = {
1197
+ 'resnet3d': ResNet3dPathway,
1198
+ # TODO: BNInceptionPathway
1199
+ }
1200
+
1201
+
1202
+ def build_pathway(cfg, *args, **kwargs):
1203
+ """Build pathway.
1204
+ Args:
1205
+ cfg (None or dict): cfg should contain:
1206
+ - type (str): identify conv layer type.
1207
+ Returns:
1208
+ nn.Module: Created pathway.
1209
+ """
1210
+ if not (isinstance(cfg, dict) and 'type' in cfg):
1211
+ raise TypeError('cfg must be a dict containing the key "type"')
1212
+ cfg_ = cfg.copy()
1213
+
1214
+ pathway_type = cfg_.pop('type')
1215
+ if pathway_type not in pathway_cfg:
1216
+ raise KeyError(f'Unrecognized pathway type {pathway_type}')
1217
+
1218
+ pathway_cls = pathway_cfg[pathway_type]
1219
+ pathway = pathway_cls(*args, **kwargs, **cfg_)
1220
+
1221
+ return pathway
1222
+
1223
+
1224
+
1225
+ """
1226
+ CAVP: Video Encoder
1227
+ """
1228
+
1229
+
1230
+ class ResNet3dSlowOnly(ResNet3dPathway):
1231
+ """SlowOnly backbone based on ResNet3dPathway.
1232
+ Args:
1233
+ *args (arguments): Arguments same as :class:`ResNet3dPathway`.
1234
+ conv1_kernel (Sequence[int]): Kernel size of the first conv layer.
1235
+ Default: (1, 7, 7).
1236
+ conv1_stride_t (int): Temporal stride of the first conv layer.
1237
+ Default: 1.
1238
+ pool1_stride_t (int): Temporal stride of the first pooling layer.
1239
+ Default: 1.
1240
+ inflate (Sequence[int]): Inflate Dims of each block.
1241
+ Default: (0, 0, 1, 1).
1242
+ **kwargs (keyword arguments): Keywords arguments for
1243
+ :class:`ResNet3dPathway`.
1244
+ """
1245
+
1246
+ def __init__(self,
1247
+ *args,
1248
+ lateral=False,
1249
+ conv1_kernel=(1, 7, 7),
1250
+ conv1_stride_t=1,
1251
+ pool1_stride_t=1,
1252
+ inflate=(0, 0, 1, 1),
1253
+ with_pool2=False,
1254
+ **kwargs):
1255
+ super().__init__(
1256
+ *args,
1257
+ lateral=lateral,
1258
+ conv1_kernel=conv1_kernel,
1259
+ conv1_stride_t=conv1_stride_t,
1260
+ pool1_stride_t=pool1_stride_t,
1261
+ inflate=inflate,
1262
+ with_pool2=with_pool2,
1263
+ **kwargs)
1264
+
1265
+ assert not self.lateral
1266
+
1267
+
1268
+ class BasicBlock(nn.Module):
1269
+ """Basic Block for resnet 18 and resnet 34
1270
+ """
1271
+
1272
+ #BasicBlock and BottleNeck block
1273
+ #have different output size
1274
+ #we use class attribute expansion
1275
+ #to distinct
1276
+ expansion = 1
1277
+
1278
+ def __init__(self, in_channels, out_channels, stride=1):
1279
+ super().__init__()
1280
+
1281
+ #residual function
1282
+ self.residual_function = nn.Sequential(
1283
+ nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False),
1284
+ nn.BatchNorm2d(out_channels),
1285
+ nn.ReLU(inplace=True),
1286
+ nn.Conv2d(out_channels, out_channels * BasicBlock.expansion, kernel_size=3, padding=1, bias=False),
1287
+ nn.BatchNorm2d(out_channels * BasicBlock.expansion)
1288
+ )
1289
+
1290
+ #shortcut
1291
+ self.shortcut = nn.Sequential()
1292
+
1293
+ # if the shortcut output dimension does not match the residual function,
1294
+ # use a 1x1 convolution to match the dimension
1295
+ if stride != 1 or in_channels != BasicBlock.expansion * out_channels:
1296
+ self.shortcut = nn.Sequential(
1297
+ nn.Conv2d(in_channels, out_channels * BasicBlock.expansion, kernel_size=1, stride=stride, bias=False),
1298
+ nn.BatchNorm2d(out_channels * BasicBlock.expansion)
1299
+ )
1300
+
1301
+ def forward(self, x):
1302
+ return nn.ReLU(inplace=True)(self.residual_function(x) + self.shortcut(x))
1303
+
1304
+ class BottleNeck(nn.Module):
1305
+ """Residual block for resnet over 50 layers
1306
+ """
1307
+ expansion = 4
1308
+ def __init__(self, in_channels, out_channels, stride=1):
1309
+ super().__init__()
1310
+ self.residual_function = nn.Sequential(
1311
+ nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
1312
+ nn.BatchNorm2d(out_channels),
1313
+ nn.ReLU(inplace=True),
1314
+ nn.Conv2d(out_channels, out_channels, stride=stride, kernel_size=3, padding=1, bias=False),
1315
+ nn.BatchNorm2d(out_channels),
1316
+ nn.ReLU(inplace=True),
1317
+ nn.Conv2d(out_channels, out_channels * BottleNeck.expansion, kernel_size=1, bias=False),
1318
+ nn.BatchNorm2d(out_channels * BottleNeck.expansion),
1319
+ )
1320
+
1321
+ self.shortcut = nn.Sequential()
1322
+
1323
+ if stride != 1 or in_channels != out_channels * BottleNeck.expansion:
1324
+ self.shortcut = nn.Sequential(
1325
+ nn.Conv2d(in_channels, out_channels * BottleNeck.expansion, stride=stride, kernel_size=1, bias=False),
1326
+ nn.BatchNorm2d(out_channels * BottleNeck.expansion)
1327
+ )
1328
+
1329
+ def forward(self, x):
1330
+ return nn.ReLU(inplace=True)(self.residual_function(x) + self.shortcut(x))
1331
+
1332
+ class ResNet(nn.Module):
1333
+
1334
+ def __init__(self, block, num_block, num_classes=100, truncate_sec=4):
1335
+ super().__init__()
1336
+
1337
+ self.in_channels = 64
1338
+
1339
+ self.conv1 = nn.Sequential(
1340
+ nn.Conv2d(1, 64, kernel_size=3, padding=1, bias=False),
1341
+ nn.BatchNorm2d(64),
1342
+ nn.ReLU(inplace=True))
1343
+ #we use a different inputsize than the original paper
1344
+ #so conv2_x's stride is 1
1345
+ self.conv2_x = self._make_layer(block, 64, num_block[0], 2)
1346
+ self.conv3_x = self._make_layer(block, 128, num_block[1], 2)
1347
+ self.conv4_x = self._make_layer(block, 256, num_block[2], 2)
1348
+ self.conv5_x = self._make_layer(block, 512, num_block[3], 2)
1349
+
1350
+ assert truncate_sec == 4 or truncate_sec == 8 or truncate_sec == 10
1351
+ if truncate_sec == 4:
1352
+ self.avg_pool = nn.AdaptiveAvgPool2d((1, 16))
1353
+ elif truncate_sec == 8:
1354
+ self.avg_pool = nn.AdaptiveAvgPool2d((1, 32))
1355
+ elif truncate_sec == 10:
1356
+ self.avg_pool = nn.AdaptiveAvgPool2d((1, 40))
1357
+
1358
+
1359
+ def _make_layer(self, block, out_channels, num_blocks, stride):
1360
+ """make resnet layers(by layer i didnt mean this 'layer' was the
1361
+ same as a neuron netowork layer, ex. conv layer), one layer may
1362
+ contain more than one residual block
1363
+ Args:
1364
+ block: block type, basic block or bottle neck block
1365
+ out_channels: output depth channel number of this layer
1366
+ num_blocks: how many blocks per layer
1367
+ stride: the stride of the first block of this layer
1368
+ Return:
1369
+ an nn.Sequential containing the stacked residual blocks
1370
+ """
1371
+
1372
+ # we have num_blocks blocks per stage; the stride of the first block
1373
+ # can be 1 or 2, and the remaining blocks always use stride 1
1374
+ strides = [stride] + [1] * (num_blocks - 1)
1375
+ layers = []
1376
+ for stride in strides:
1377
+ layers.append(block(self.in_channels, out_channels, stride))
1378
+ self.in_channels = out_channels * block.expansion
1379
+
1380
+ return nn.Sequential(*layers)
1381
+
1382
+ def forward(self, x):
1383
+ output = self.conv1(x)
1384
+ output = self.conv2_x(output)
1385
+ output = self.conv3_x(output)
1386
+ output = self.conv4_x(output)
1387
+ output = self.conv5_x(output)
1388
+ output = self.avg_pool(output)
1389
+ bs, c, _, t = output.shape
1390
+ output = output.view(bs, c, t)
1391
+ return output
1392
+
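A quick smoke test for the residual network above (illustrative only, not part of the uploaded file; the 128x400 input size is just an example, and the layout is assumed to be (batch, 1, mel_bins, frames)):

```python
import torch

# ResNet-18-style stack built from BasicBlock; with truncate_sec=4 the adaptive
# pool collapses the frequency axis and keeps 16 temporal bins.
net = ResNet(BasicBlock, [2, 2, 2, 2], truncate_sec=4)
dummy = torch.randn(2, 1, 128, 400)
print(net(dummy).shape)  # torch.Size([2, 512, 16])
```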
1393
+
1394
+
1395
+ def _ntuple(n):
1396
+ def parse(x):
1397
+ if isinstance(x, collections.abc.Iterable):
1398
+ return x
1399
+ return tuple(repeat(x, n))
1400
+ return parse
1401
+
1402
+
1403
+
1404
+ """
1405
+ Cnn14: Spec Encoder
1406
+ """
1407
+
1408
+ def interpolate(x, ratio):
1409
+ """Interpolate data in time domain. This is used to compensate the
1410
+ resolution reduction in downsampling of a CNN.
1411
+ Args:
1412
+ x: (batch_size, time_steps, classes_num)
1413
+ ratio: int, ratio to interpolate
1414
+ Returns:
1415
+ upsampled: (batch_size, time_steps * ratio, classes_num)
1416
+ """
1417
+ (batch_size, time_steps, classes_num) = x.shape
1418
+ upsampled = x[:, :, None, :].repeat(1, 1, ratio, 1)
1419
+ upsampled = upsampled.reshape(batch_size, time_steps * ratio, classes_num)
1420
+ return upsampled
1421
+
1422
+ def init_bn(bn):
1423
+ """Initialize a Batchnorm layer. """
1424
+ bn.bias.data.fill_(0.)
1425
+ bn.weight.data.fill_(1.)
1426
+
1427
+
1428
+ def init_layer(layer):
1429
+ """Initialize a Linear or Convolutional layer. """
1430
+ nn.init.xavier_uniform_(layer.weight)
1431
+
1432
+ if hasattr(layer, 'bias'):
1433
+ if layer.bias is not None:
1434
+ layer.bias.data.fill_(0.)
1435
+
1436
+
1437
+ class ConvBlock(nn.Module):
1438
+ def __init__(self, in_channels, out_channels):
1439
+
1440
+ super(ConvBlock, self).__init__()
1441
+
1442
+ self.conv1 = nn.Conv2d(in_channels=in_channels,
1443
+ out_channels=out_channels,
1444
+ kernel_size=(3, 3), stride=(1, 1),
1445
+ padding=(1, 1), bias=False)
1446
+
1447
+ self.conv2 = nn.Conv2d(in_channels=out_channels,
1448
+ out_channels=out_channels,
1449
+ kernel_size=(3, 3), stride=(1, 1),
1450
+ padding=(1, 1), bias=False)
1451
+
1452
+ self.bn1 = nn.BatchNorm2d(out_channels)
1453
+ self.bn2 = nn.BatchNorm2d(out_channels)
1454
+
1455
+ self.init_weight()
1456
+
1457
+ def init_weight(self):
1458
+ init_layer(self.conv1)
1459
+ init_layer(self.conv2)
1460
+ init_bn(self.bn1)
1461
+ init_bn(self.bn2)
1462
+
1463
+
1464
+ def forward(self, input, pool_size=(2, 2), pool_type='avg'):
1465
+
1466
+ x = input
1467
+ x = F.relu_(self.bn1(self.conv1(x)))
1468
+ x = F.relu_(self.bn2(self.conv2(x)))
1469
+ if pool_type == 'max':
1470
+ x = F.max_pool2d(x, kernel_size=pool_size)
1471
+ elif pool_type == 'avg':
1472
+ x = F.avg_pool2d(x, kernel_size=pool_size)
1473
+ elif pool_type == 'avg+max':
1474
+ x1 = F.avg_pool2d(x, kernel_size=pool_size)
1475
+ x2 = F.max_pool2d(x, kernel_size=pool_size)
1476
+ x = x1 + x2
1477
+ else:
1478
+ raise Exception('Incorrect argument!')
1479
+
1480
+ return x
1481
+
1482
+
1483
+
1484
+ class Cnn14(nn.Module):
1485
+ def __init__(self, embed_dim, enable_fusion=False, fusion_type='None'):
1486
+ super(Cnn14, self).__init__()
1487
+
1488
+ self.enable_fusion = enable_fusion
1489
+ self.fusion_type = fusion_type
1490
+
1491
+ self.bn = nn.BatchNorm2d(128)
1492
+
1493
+ if (self.enable_fusion) and (self.fusion_type == 'channel_map'):
1494
+ self.conv_block1 = ConvBlock(in_channels=4, out_channels=64)
1495
+ else:
1496
+ self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
1497
+ self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
1498
+ self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
1499
+ self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
1500
+ self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
1501
+ self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048)
1502
+
1503
+ self.fc1 = nn.Linear(2048, 2048, bias=True)
1504
+ self.final_project = nn.Linear(2048, embed_dim, bias=True)
1505
+
1506
+ self.init_weight()
1507
+
1508
+ def init_weight(self):
1509
+ init_bn(self.bn)
1510
+ init_layer(self.fc1)
1511
+ init_layer(self.final_project)
1512
+
1513
+ def forward(self, input, mixup_lambda=None, device=None):
1514
+ """
1515
+ Input: (batch_size, data_length)"""
1516
+
1517
+ x = input
1518
+ x = x.transpose(1, 3)
1519
+ x = self.bn(x)
1520
+ x = x.transpose(1, 3)
1521
+
1522
+ x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
1523
+ x = F.dropout(x, p=0.2, training=self.training)
1524
+ x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
1525
+ x = F.dropout(x, p=0.2, training=self.training)
1526
+ x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
1527
+ x = F.dropout(x, p=0.2, training=self.training)
1528
+ x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
1529
+ x = F.dropout(x, p=0.2, training=self.training)
1530
+ x = self.conv_block5(x, pool_size=(1, 2), pool_type='avg')
1531
+ x = F.dropout(x, p=0.2, training=self.training)
1532
+ x = self.conv_block6(x, pool_size=(1, 1), pool_type='avg')
1533
+ x = F.dropout(x, p=0.2, training=self.training)
1534
+ x = torch.mean(x, dim=3)
1535
+
1536
+ latent_x1 = F.max_pool1d(x, kernel_size=3, stride=1, padding=1)
1537
+ latent_x2 = F.avg_pool1d(x, kernel_size=3, stride=1, padding=1)
1538
+ latent_x = latent_x1 + latent_x2
1539
+ latent_x = latent_x.transpose(1, 2)
1540
+ latent_x = F.relu_(self.fc1(latent_x))
1541
+ x = F.relu_(self.fc1(latent_x))
1542
+ output = self.final_project(x)
1543
+ return output
1544
+
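A quick shape check for the spec encoder above (illustrative only, not part of the uploaded file). Each of the first four blocks halves the time axis, so 1024 fbank frames become 64 output tokens; the 128-bin frequency axis is fixed by the `BatchNorm2d(128)` at the input:

```python
import torch

enc = Cnn14(embed_dim=768)
dummy = torch.randn(2, 1, 1024, 128)  # (batch, 1, time_steps, mel_bins)
print(enc(dummy).shape)               # torch.Size([2, 64, 768])
```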
1545
+
cavp_util.py ADDED
@@ -0,0 +1,150 @@
1
+ import torch
2
+ import subprocess
3
+ from pathlib import Path
4
+ import os
5
+ import cv2
6
+ import numpy as np
7
+ import torchvision.transforms as transforms
8
+ from PIL import Image
9
+ from tqdm import tqdm
10
+ from omegaconf import OmegaConf
11
+ import importlib
12
+
13
+
14
+ def which_ffmpeg() -> str:
15
+ '''Determines the path to ffmpeg library
16
+
17
+ Returns:
18
+ str -- path to the library
19
+ '''
20
+ result = subprocess.run(['which', 'ffmpeg'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
21
+ ffmpeg_path = result.stdout.decode('utf-8').replace('\n', '')
22
+ return ffmpeg_path
23
+
24
+ def reencode_video_with_diff_fps(video_path: str, tmp_path: str, extraction_fps: int, start_second, truncate_second) -> str:
25
+ '''Reencodes the video given the path and saves it to the tmp_path folder.
26
+
27
+ Args:
28
+ video_path (str): original video
29
+ tmp_path (str): the folder where tmp files are stored (will be appended with a proper filename).
30
+ extraction_fps (int): target fps value
31
+
32
+ Returns:
33
+ str: The path where the tmp file is stored, used later to load the video.
34
+ '''
35
+ assert which_ffmpeg() != '', 'Is ffmpeg installed? Check if the conda environment is activated.'
36
+ os.makedirs(tmp_path, exist_ok=True)
37
+
38
+ # form the path to tmp directory
39
+ new_path = os.path.join(tmp_path, f'{Path(video_path).stem}_new_fps_{str(extraction_fps)}_truncate_{start_second}_{truncate_second}.mp4')
40
+ cmd = f'{which_ffmpeg()} -hide_banner -loglevel panic '
41
+ cmd += f'-y -ss {start_second} -t {truncate_second} -i {video_path} -an -filter:v fps=fps={extraction_fps} {new_path}'
42
+ subprocess.call(cmd.split())
43
+ return new_path
44
+
45
+ def instantiate_from_config(config, reload=False):
46
+ if not "target" in config:
47
+ if config == '__is_first_stage__':
48
+ return None
49
+ elif config == "__is_unconditional__":
50
+ return None
51
+ raise KeyError("Expected key `target` to instantiate.")
52
+ return get_obj_from_str(config["target"], reload=reload)(**config.get("params", dict()))
53
+
54
+ def get_obj_from_str(string, reload=False):
55
+ module, cls = string.rsplit(".", 1)
56
+ if reload:
57
+ module_imp = importlib.import_module(module)
58
+ importlib.reload(module_imp)
59
+ return getattr(importlib.import_module(module, package=None), cls)
60
+
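For reference, `instantiate_from_config` builds an object from a `{"target": ..., "params": ...}` dict, which is how the CAVP model is created below; a minimal sketch using `torch.nn.Linear` as a stand-in target (illustrative only, not part of the uploaded file):

```python
layer = instantiate_from_config({
    "target": "torch.nn.Linear",
    "params": {"in_features": 4, "out_features": 2},
})
print(type(layer))  # <class 'torch.nn.modules.linear.Linear'>
```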
61
+
62
+ class Extract_CAVP_Features(torch.nn.Module):
63
+
64
+ def __init__(self, device=None, tmp_path="./", video_shape=(224,224), config_path=None, ckpt_path=None):
65
+ super(Extract_CAVP_Features, self).__init__()
66
+ self.fps = 4
67
+ self.batch_size = 40
68
+ self.device = device
69
+ self.tmp_path = tmp_path
70
+
71
+ # Initialize CAVP model:
72
+ config = OmegaConf.load(config_path)
73
+ self.stage1_model = instantiate_from_config(config.model).to(device)
74
+
75
+ # Load the pretrained CAVP weights:
76
+ assert ckpt_path is not None
77
+ self.init_first_from_ckpt(ckpt_path)
78
+ self.stage1_model.eval()
79
+
80
+ # Transform:
81
+ self.img_transform = transforms.Compose([
82
+ transforms.Resize(video_shape),
83
+ transforms.ToTensor(),
84
+ ])
85
+
86
+
87
+ def init_first_from_ckpt(self, path):
88
+ model = torch.load(path, map_location="cpu")
89
+ if "state_dict" in list(model.keys()):
90
+ model = model["state_dict"]
91
+ # Remove: module prefix
92
+ new_model = {}
93
+ for key in model.keys():
94
+ new_key = key.replace("module.","")
95
+ new_model[new_key] = model[key]
96
+ self.stage1_model.load_state_dict(new_model, strict=False)
97
+
98
+
99
+ @torch.no_grad()
100
+ def forward(self, video_path, tmp_path="./tmp_folder"):
101
+ start_second = 0
102
+ truncate_second = 10
103
+ self.tmp_path = tmp_path
104
+
105
+ # Load the video, change fps:
106
+ video_path_low_fps = reencode_video_with_diff_fps(video_path, self.tmp_path, self.fps, start_second, truncate_second)
107
+
108
+ # read the video:
109
+ cap = cv2.VideoCapture(video_path_low_fps)
110
+
111
+ feat_batch_list = []
112
+ video_feats = []
113
+ first_frame = True
114
+ # pbar = tqdm(cap.get(7))
115
+ i = 0
116
+ while cap.isOpened():
117
+ i += 1
118
+ # pbar.set_description("Processing Frames: {} Total: {}".format(i, cap.get(7)))
119
+ frames_exists, rgb = cap.read()
120
+
121
+ if first_frame:
122
+ if not frames_exists:
123
+ continue
124
+ first_frame = False
125
+
126
+ if frames_exists:
127
+ rgb = cv2.cvtColor(rgb, cv2.COLOR_BGR2RGB)
128
+ rgb_tensor = self.img_transform(Image.fromarray(rgb)).unsqueeze(0).to(self.device)
129
+ feat_batch_list.append(rgb_tensor) # 32 x 3 x 224 x 224
130
+
131
+ # Forward:
132
+ if len(feat_batch_list) == self.batch_size:
133
+ # Stage1 Model:
134
+ input_feats = torch.cat(feat_batch_list,0).unsqueeze(0).to(self.device)
135
+ contrastive_video_feats = self.stage1_model.encode_video(input_feats, normalize=True, pool=False)
136
+ video_feats.extend(contrastive_video_feats.detach().cpu().numpy())
137
+ feat_batch_list = []
138
+ else:
139
+ if len(feat_batch_list) != 0:
140
+ input_feats = torch.cat(feat_batch_list,0).unsqueeze(0).to(self.device)
141
+ contrastive_video_feats = self.stage1_model.encode_video(input_feats, normalize=True, pool=False)
142
+ video_feats.extend(contrastive_video_feats.detach().cpu().numpy())
143
+ cap.release()
144
+ break
145
+
146
+ # Remove the file
147
+ os.remove(video_path_low_fps)
148
+ video_contrastive_feats = np.concatenate(video_feats)
149
+ return video_contrastive_feats
150
+
dataset.py ADDED
@@ -0,0 +1,273 @@
1
+ import os
2
+
3
+ import numpy as np
4
+ import torch
5
+ import random
6
+ import math
7
+
8
+ from torch.utils.data import Dataset
9
+
10
+ class audio_video_spec_fullset_Dataset(Dataset):
11
+ # Loads paired mel-spectrograms, CAVP video features, fbanks, and onset features
12
+ def __init__(self, split, data_dir):
13
+ super().__init__()
14
+ debug_num=False
15
+
16
+ if split == "train":
17
+ self.split = "train"
18
+ elif split == "valid" or split == 'test':
19
+ self.split = "test"
20
+
21
+ # Default params:
22
+ self.min_duration = 2
23
+ self.sr = 16000
24
+ self.duration = 10
25
+ self.truncate = 130560
26
+ self.fps = 4
27
+ self.fix_frames = False
28
+ self.hop_len = 160
29
+ self.onset_truncate = 120
30
+
31
+
32
+ # spec_dir: spectrogram path
33
+ # feat_dir: CAVP feature path
34
+ # fbank_dir: fbank feature path
35
+ # onset_dir: onset feature path
36
+ dataset_spec_dir = os.path.join(data_dir, "melspec", self.split)
37
+ dataset_feat_dir = os.path.join(data_dir, "cavp_feats", self.split)
38
+ dataset_fbank_dir = os.path.join(data_dir, "fbank", self.split)
39
+ dataset_onset_dir = os.path.join(data_dir, "onset_feats", "train")
40
+ list_onset = os.listdir(dataset_onset_dir)
41
+ list_onset = list(map(lambda x: x.split('.')[0], list_onset))
42
+
43
+
44
+ with open(os.path.join(data_dir, '{}_list.txt'.format(self.split)), "r") as f:
45
+ data_list = f.readlines()
46
+ data_list = list(map(lambda x: x.strip(), data_list))
47
+ data_list = list(set(data_list) & set(list_onset))
48
+
49
+ spec_list = list(map(lambda x: os.path.join(dataset_spec_dir, x) + ".npy", data_list)) # spec
50
+ feat_list = list(map(lambda x: os.path.join(dataset_feat_dir, x) + ".npz", data_list)) # feat
51
+ fbank_list = list(map(lambda x: os.path.join(dataset_fbank_dir, x) + ".npy", data_list)) # fbank
52
+ onset_list = list(map(lambda x: os.path.join(dataset_onset_dir, x) + ".npy", data_list)) # onset
53
+
54
+
55
+ # Merge Data:
56
+ self.data_list = data_list
57
+ self.spec_list = spec_list
58
+ self.feat_list = feat_list
59
+ self.fbank_list = fbank_list
60
+ self.onset_list = onset_list
61
+
62
+
63
+ assert len(self.data_list) == len(self.spec_list) == len(self.feat_list)
64
+
65
+
66
+ shuffle_idx = np.random.permutation(np.arange(len(self.data_list)))
67
+ self.data_list = [self.data_list[i] for i in shuffle_idx]
68
+ self.spec_list = [self.spec_list[i] for i in shuffle_idx]
69
+ self.feat_list = [self.feat_list[i] for i in shuffle_idx]
70
+ self.fbank_list = [self.fbank_list[i] for i in shuffle_idx]
71
+ self.onset_list = [self.onset_list[i] for i in shuffle_idx]
72
+
73
+
74
+ if debug_num:
75
+ self.data_list = self.data_list[:debug_num]
76
+ self.spec_list = self.spec_list[:debug_num]
77
+ self.feat_list = self.feat_list[:debug_num]
78
+ self.fbank_list = self.fbank_list[:debug_num]
79
+ self.onset_list = self.onset_list[:debug_num]
80
+ print('Split: {} Sample Num: {}'.format(split, len(self.data_list)))
81
+
82
+
83
+ def __len__(self):
84
+ return len(self.data_list)
85
+
86
+
87
+ def load_spec_and_feat(self, spec_path, video_feat_path, fbank_path, onset_path):
88
+ """Load audio spec and video feat"""
89
+ spec_raw = np.load(spec_path).astype(np.float32).T # channel: 1
90
+ video_feat = np.load(video_feat_path)['arr_0'].astype(np.float32)
91
+ fbank = np.load(fbank_path).astype(np.float32)
92
+ onset = np.load(onset_path).astype(np.float32).reshape(-1)
93
+
94
+
95
+ # Padding the samples:
96
+ spec_len = self.sr * self.duration / self.hop_len
97
+ fbank_len = int(spec_len / spec_raw.shape[1] * len(fbank))
98
+ if spec_raw.shape[1] < spec_len:
99
+ fbank = np.tile(fbank, (math.ceil(spec_len / spec_raw.shape[1]), 1))
100
+ spec_raw = np.tile(spec_raw, math.ceil(spec_len / spec_raw.shape[1]))
101
+ spec_raw = spec_raw[:, :int(spec_len)]
102
+ fbank = fbank[:fbank_len]
103
+
104
+ feat_len = self.fps * self.duration
105
+ if video_feat.shape[0] < feat_len:
106
+ video_feat = np.tile(video_feat, (math.ceil(feat_len / video_feat.shape[0]), 1))
107
+ video_feat = video_feat[:int(feat_len)]
108
+
109
+ onset_len = 15 * self.duration
110
+ if onset.shape[0] < onset_len:
111
+ onset = np.tile(onset, (math.ceil(onset_len / onset.shape[0])))
112
+ onset = onset[:int(onset_len)]
113
+
114
+ return spec_raw, video_feat, fbank, onset
115
+
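For reference, the padding above and the cropping in `mix_audio_and_feat` below both depend on the default constants set in `__init__`; the resulting lengths (derived from the code, listed here only as a reading aid) are:

```python
# With sr=16000, duration=10 s, hop_len=160, fps=4, truncate=130560 samples:
#   full mel length   : 16000 * 10 / 160        = 1000 frames
#   full video length : 4 * 10                  = 40 CAVP frames
#   full onset length : 15 * 10                 = 150 onset steps
#   mel crop          : 130560 / 160            = 816 frames
#   video crop        : int(4 * 130560 / 16000) = 32 frames
#   onset crop        : onset_truncate          = 120 steps
```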
116
+
117
+ def mix_audio_and_feat(self, spec1=None, spec2=None, video_feat1=None, video_feat2=None, fbank1=None, fbank2=None, onset1=None, onset2=None, video_info_dict={}, mode='single'):
118
+ """ Return Mix Spec and Mix video feat"""
119
+ if mode == "single":
120
+ # spec1:
121
+ if not self.fix_frames:
122
+ start_idx = random.randint(0, self.sr * self.duration - self.truncate - 1) # audio start
123
+ else:
124
+ start_idx = 0
125
+
126
+ start_frame = int(self.fps * start_idx / self.sr)
127
+ truncate_frame = int(self.fps * self.truncate / self.sr)
128
+
129
+ start_onset = int(15 * start_idx / self.sr)
130
+ truncate_onset = self.onset_truncate
131
+
132
+ # Spec Start & Truncate:
133
+ spec_start = int(start_idx / self.hop_len)
134
+ spec_truncate = int(self.truncate / self.hop_len)
135
+
136
+ # Fbank Start & Truncate:
137
+ fbank_start = int((spec_start / spec1.shape[1]) * len(fbank1))
138
+ fbank_truncate = int((spec_truncate / spec1.shape[1]) * len(fbank1))
139
+
140
+ spec1 = spec1[:, spec_start : spec_start + spec_truncate]
141
+ video_feat1 = video_feat1[start_frame: start_frame + truncate_frame]
142
+ fbank1 = fbank1[fbank_start: fbank_start + fbank_truncate]
143
+ onset1 = onset1[start_onset: start_onset + truncate_onset]
144
+
145
+ # info_dict:
146
+ video_info_dict['video_time1'] = str(start_frame) + '_' + str(start_frame+truncate_frame) # Start frame, end frame
147
+ video_info_dict['video_time2'] = ""
148
+ return spec1, video_feat1, fbank1, onset1, video_info_dict
149
+
150
+ elif mode == "concat":
151
+ total_spec_len = int(self.truncate / self.hop_len)
152
+ # Random truncate length:
153
+ spec1_truncate_len = random.randint(self.min_duration * self.sr // self.hop_len, total_spec_len - self.min_duration * self.sr // self.hop_len - 1)
154
+ spec2_truncate_len = total_spec_len - spec1_truncate_len
155
+
156
+ # Sample spec clip:
157
+ spec_start1 = random.randint(0, total_spec_len - spec1_truncate_len - 1)
158
+ spec_start2 = random.randint(0, total_spec_len - spec2_truncate_len - 1)
159
+ spec_end1, spec_end2 = spec_start1 + spec1_truncate_len, spec_start2 + spec2_truncate_len
160
+
161
+ start1_fbank, truncate1_fbank = int((spec_start1 / spec1.shape[1]) * len(fbank1)), int((spec1_truncate_len / spec1.shape[1]) * len(fbank1))
162
+ start2_fbank, truncate2_fbank = int((spec_start2 / spec2.shape[1]) * len(fbank2)), int((spec2_truncate_len / spec2.shape[1]) * len(fbank2))
163
+
164
+ # concat spec:
165
+ spec1, spec2 = spec1[:, spec_start1 : spec_end1], spec2[:, spec_start2 : spec_end2]
166
+ concat_audio_spec = np.concatenate([spec1, spec2], axis=1)
167
+
168
+ # Concat Video Feat:
169
+ start1_frame, truncate1_frame = int(self.fps * spec_start1 * self.hop_len / self.sr), int(self.fps * spec1_truncate_len * self.hop_len / self.sr)
170
+ start2_frame, truncate2_frame = int(self.fps * spec_start2 * self.hop_len / self.sr), int(self.fps * self.truncate / self.sr) - truncate1_frame
171
+ video_feat1, video_feat2 = video_feat1[start1_frame : start1_frame + truncate1_frame], video_feat2[start2_frame : start2_frame + truncate2_frame]
172
+ concat_video_feat = np.concatenate([video_feat1, video_feat2])
173
+
174
+ # Concat Fbank:
175
+ fbank1, fbank2 = fbank1[start1_fbank : start1_fbank + truncate1_fbank], fbank2[start2_fbank : start2_fbank + truncate2_fbank]
176
+ concat_fbank = np.concatenate([fbank1, fbank2])
177
+
178
+ # Concat Onset:
179
+ start1_onset, truncate1_onset = int(15 * spec_start1 * self.hop_len / self.sr), int(15 * spec1_truncate_len * self.hop_len / self.sr)
180
+ start2_onset, truncate2_onset = int(15 * spec_start2 * self.hop_len / self.sr), self.onset_truncate - truncate1_onset
181
+ onset_feat1, onset_feat2 = onset1[start1_onset : start1_onset + truncate1_onset], onset2[start2_onset : start2_onset + truncate2_onset]
182
+ concat_onset = np.concatenate([onset_feat1, onset_feat2])
183
+
184
+ video_info_dict['video_time1'] = str(start1_frame) + '_' + str(start1_frame+truncate1_frame) # Start frame, end frame
185
+ video_info_dict['video_time2'] = str(start2_frame) + '_' + str(start2_frame+truncate2_frame)
186
+ return concat_audio_spec, concat_video_feat, concat_fbank, concat_onset, video_info_dict
187
+
188
+
189
+
190
+ def __getitem__(self, idx):
191
+ audio_name1 = self.data_list[idx]
192
+ spec_npy_path1 = self.spec_list[idx]
193
+ video_feat_path1 = self.feat_list[idx]
194
+ fbank_path1 = self.fbank_list[idx]
195
+ onset_path1 = self.onset_list[idx]
196
+
197
+
198
+ # select other video:
199
+ flag = False
200
+ if random.uniform(0, 1) < 0.5:
201
+ flag = True
202
+ random_idx = idx
203
+ while random_idx == idx:
204
+ random_idx = random.randint(0, len(self.data_list)-1)
205
+ audio_name2 = self.data_list[random_idx]
206
+ spec_npy_path2 = self.spec_list[random_idx]
207
+ video_feat_path2 = self.feat_list[random_idx]
208
+ fbank_path2 = self.fbank_list[random_idx]
209
+ onset_path2 = self.onset_list[random_idx]
210
+
211
+
212
+ # Load the Spec and Feat:
213
+ spec1, video_feat1, fbank1, onset1 = self.load_spec_and_feat(spec_npy_path1, video_feat_path1, fbank_path1, onset_path1)
214
+
215
+ if flag:
216
+ spec2, video_feat2, fbank2, onset2 = self.load_spec_and_feat(spec_npy_path2, video_feat_path2, fbank_path2, onset_path2)
217
+ video_info_dict = {'audio_name1':audio_name1, 'audio_name2': audio_name2}
218
+ mix_spec, mix_video_feat, mix_fbank, mix_onset, mix_info = self.mix_audio_and_feat(spec1, spec2, video_feat1, video_feat2, fbank1, fbank2, onset1, onset2, video_info_dict, mode='concat')
219
+ else:
220
+ video_info_dict = {'audio_name1':audio_name1, 'audio_name2': ""}
221
+ mix_spec, mix_video_feat, mix_fbank, mix_onset, mix_info = self.mix_audio_and_feat(spec1=spec1, video_feat1=video_feat1, fbank1=fbank1, onset1=onset1, video_info_dict=video_info_dict, mode='single')
222
+
223
+
224
+ norm_mean = -4.268
225
+ norm_std = 4.569
226
+ target_length = 1024
227
+ n_frames = mix_fbank.shape[0]
228
+ mix_fbank = torch.from_numpy(mix_fbank).contiguous()
229
+ diff = target_length - n_frames
230
+ if diff > 0:
231
+ m = torch.nn.ZeroPad2d((0, 0, 0, diff))
232
+ mix_fbank = m(mix_fbank)
233
+ mix_fbank[n_frames:] = (mix_fbank[n_frames:] - norm_mean) / (norm_std * 2)
234
+ elif diff < 0:
235
+ mix_fbank = mix_fbank[0:target_length, :]
236
+
237
+ mix_spec = mix_spec[None]
238
+ mix_spec = torch.from_numpy(mix_spec).contiguous()
239
+ mix_video_feat = torch.from_numpy(mix_video_feat).contiguous()
240
+ mix_onset = torch.from_numpy(mix_onset).contiguous()
241
+
242
+ data_dict = {}
243
+ data_dict['mix_spec'] = mix_spec
244
+ data_dict['mix_video_feat'] = mix_video_feat
245
+ data_dict['mix_fbank'] = mix_fbank
246
+ data_dict['mix_onset'] = mix_onset
247
+ data_dict['mix_info_dict'] = mix_info
248
+ return data_dict
249
+
250
+
251
+
252
+ class audio_video_spec_fullset_Dataset_Train(audio_video_spec_fullset_Dataset):
253
+ def __init__(self, data_dir):
254
+ super().__init__(split='train', data_dir=data_dir)
255
+
256
+
257
+
258
+ def collate_fn_taro(data):
259
+ mix_spec = torch.stack([example["mix_spec"] for example in data])
260
+ mix_video_feat = torch.stack([example["mix_video_feat"] for example in data])
261
+ mix_fbank = torch.stack([example["mix_fbank"] for example in data])
262
+ mix_onset = torch.stack([example["mix_onset"] for example in data])
263
+ mix_info_dict = [example["mix_info_dict"] for example in data]
264
+
265
+ return {
266
+ "mix_spec": mix_spec,
267
+ "mix_video_feat": mix_video_feat,
268
+ "mix_fbank": mix_fbank,
269
+ "mix_onset": mix_onset,
270
+ "mix_info_dict": mix_info_dict,
271
+ }
272
+
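A minimal usage sketch of the dataset and collate function above (illustrative only, not part of the uploaded file; the `./VGGSound` path is a placeholder and must contain the melspec/cavp_feats/fbank/onset_feats folders and the `train_list.txt` file expected by `__init__`):

```python
from torch.utils.data import DataLoader

train_set = audio_video_spec_fullset_Dataset_Train(data_dir="./VGGSound")
loader = DataLoader(train_set, batch_size=4, shuffle=True, collate_fn=collate_fn_taro)

batch = next(iter(loader))
print(batch["mix_spec"].shape, batch["mix_video_feat"].shape, batch["mix_onset"].shape)
```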
273
+
infer.py ADDED
@@ -0,0 +1,151 @@
1
+ import torch
2
+ import os
3
+ import numpy as np
4
+ import random
5
+ import soundfile as sf
6
+ import ffmpeg
7
+
8
+ from argparse import ArgumentParser
9
+ from diffusers import AudioLDM2Pipeline
10
+ from models import MMDiT
11
+ from samplers import euler_sampler, euler_maruyama_sampler
12
+ from cavp_util import Extract_CAVP_Features
13
+ from onset_util import VideoOnsetNet, extract_onset
14
+
15
+ def set_global_seed(seed):
16
+ np.random.seed(seed % (2**32))
17
+ random.seed(seed)
18
+ torch.manual_seed(seed)
19
+ torch.cuda.manual_seed(seed)
20
+ torch.backends.cudnn.deterministic = True
21
+
22
+ def main():
23
+ parser = ArgumentParser(description="Inference script parameters")
24
+ parser.add_argument("--video_path", type=str, default="./test.mp4", required=True, help="Path to the input video file")
25
+ parser.add_argument("--save_folder_path", type=str, default="./output", help="Folder to save output files")
26
+ parser.add_argument("--cavp_config_path", type=str, default="./cavp.yaml", help="Path to CAVP config file")
27
+ parser.add_argument("--cavp_ckpt_path", type=str, default="./cavp_epoch66.ckpt", help="Path to CAVP checkpoint file")
28
+ parser.add_argument("--onset_ckpt_path", type=str, default="./onset_model.ckpt", help="Path to onset model checkpoint file")
29
+ parser.add_argument("--model_ckpt_path", type=str, default="./taro_ckpt.pt", help="Path to MMDiT model checkpoint file")
30
+
31
+ args = parser.parse_args()
32
+ os.makedirs(args.save_folder_path, exist_ok=True)
33
+
34
+ seed = 0
35
+ set_global_seed(seed)
36
+ torch.set_grad_enabled(False)
37
+ device = "cuda" if torch.cuda.is_available() else "cpu"
38
+ weight_dtype = torch.bfloat16
39
+
40
+ # Load models
41
+ extract_cavp = Extract_CAVP_Features(device=device, config_path=args.cavp_config_path, ckpt_path=args.cavp_ckpt_path)
42
+
43
+ # Load the pre-trained onset detection model
44
+ state_dict = torch.load(args.onset_ckpt_path)["state_dict"]
45
+ new_state_dict = {}
46
+ for key, value in state_dict.items():
47
+ if "model.net.model" in key:
48
+ new_key = key.replace("model.net.model", "net.model") # Adjust the key as needed
49
+ elif "model.fc." in key:
50
+ new_key = key.replace("model.fc", "fc") # Adjust the key as needed
51
+ new_state_dict[new_key] = value
52
+ onset_model = VideoOnsetNet(False).to(device)
53
+ onset_model.load_state_dict(new_state_dict)
54
+ onset_model.eval()
55
+
56
+ model = MMDiT(
57
+ adm_in_channels=120,
58
+ z_dims = [768],
59
+ encoder_depth=4,
60
+ ).to(device)
61
+
62
+ state_dict = torch.load(args.model_ckpt_path, map_location=device)['ema']
63
+ model.load_state_dict(state_dict)
64
+ model.eval()
65
+ model.to(weight_dtype)
66
+ model_audioldm = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2")
67
+ vae = model_audioldm.vae.to(device)
68
+ vae.eval()
69
+
70
+ vocoder = model_audioldm.vocoder.to(device)
71
+
72
+ # Extract Features
73
+ video_name = os.path.basename(args.video_path).split(".")[0]
74
+
75
+ cavp_feats = extract_cavp(args.video_path, tmp_path=args.save_folder_path)
76
+ onset_feats = extract_onset(args.video_path, onset_model, tmp_path=args.save_folder_path, device=device)
77
+
78
+ # Parameters for inference
79
+ sr = 16000
80
+ truncate = 131072
81
+ fps = 4
82
+
83
+ truncate_frame = int(fps * truncate / sr)
84
+ truncate_onset = 120
85
+
86
+ cfg_scale = 8
87
+ mode = "sde"
88
+ num_steps = 25
89
+ heun = False
90
+ guidance_low = 0.0
91
+ guidance_high = 0.7
92
+ path_type = "linear"
93
+
94
+ latent_size = (204, 16)
95
+ latents_scale = torch.tensor(
96
+ [0.18215, 0.18215, 0.18215, 0.18215, 0.18215, 0.18215, 0.18215, 0.18215]
97
+ ).view(1, 8, 1, 1).to(device)
98
+
99
+ # Start inference
100
+ video_feats = torch.from_numpy(cavp_feats[:truncate_frame]).unsqueeze(0).to(device).to(weight_dtype)
101
+ onset_feats = torch.from_numpy(onset_feats[:truncate_onset]).unsqueeze(0).to(device).to(weight_dtype)
102
+
103
+ z = torch.randn(len(video_feats), model.in_channels, latent_size[0], latent_size[1], device=device).to(weight_dtype)
104
+
105
+ # Sample audios
106
+ sampling_kwargs = dict(
107
+ model=model,
108
+ latents=z,
109
+ y=onset_feats,
110
+ context=video_feats,
111
+ num_steps=num_steps,
112
+ heun=heun,
113
+ cfg_scale=cfg_scale,
114
+ guidance_low=guidance_low,
115
+ guidance_high=guidance_high,
116
+ path_type=path_type,
117
+ )
118
+
119
+ with torch.no_grad():
120
+ if mode == "sde":
121
+ samples = euler_maruyama_sampler(**sampling_kwargs)
122
+ elif mode == "ode":
123
+ samples = euler_sampler(**sampling_kwargs)
124
+ else:
125
+ raise NotImplementedError()
126
+
127
+ samples = vae.decode(samples / latents_scale).sample
128
+ wav_samples = vocoder(samples.squeeze()).detach().cpu().numpy()
129
+
130
+ # Save the audio
131
+ sf.write(os.path.join(args.save_folder_path, video_name + ".wav"), wav_samples, sr)
132
+
133
+ # Save the video with the generated audio
134
+ trimmed_video_file_path = os.path.join(args.save_folder_path, video_name + "_trimmed.mp4")
135
+ trimmed_audio_file_path = os.path.join(args.save_folder_path, video_name + ".wav")
136
+ output_path = os.path.join(args.save_folder_path, video_name + "_wa.mp4")
137
+
138
+ # Trim the video to match the audio duration
139
+ ffmpeg.input(args.video_path, ss=0, t=truncate / sr).output(trimmed_video_file_path, vcodec='libx264', an=None).run(overwrite_output=True)
140
+
141
+ # Combine trimmed video and generated audio
142
+ input_video = ffmpeg.input(trimmed_video_file_path)
143
+ input_audio = ffmpeg.input(trimmed_audio_file_path)
144
+ ffmpeg.output(input_video, input_audio, output_path, vcodec='libx264', acodec='aac', strict='experimental').run(overwrite_output=True)
145
+ os.remove(trimmed_video_file_path)
146
+
147
+ print("========================================FINISH INFERENCE===========================================")
148
+
149
+ if __name__ == "__main__":
150
+ main()
151
+
loss.py ADDED
@@ -0,0 +1,94 @@
1
+ import torch
2
+ import numpy as np
3
+ import torch.nn.functional as F
4
+
5
+ def mean_flat(x):
6
+ """
7
+ Take the mean over all non-batch dimensions.
8
+ """
9
+ return torch.mean(x, dim=list(range(1, len(x.size()))))
10
+
11
+ def sum_flat(x):
12
+ """
13
+ Take the sum over all non-batch dimensions.
14
+ """
15
+ return torch.sum(x, dim=list(range(1, len(x.size()))))
16
+
17
+ class SILoss:
18
+ def __init__(
19
+ self,
20
+ prediction='v',
21
+ path_type="linear",
22
+ weighting="uniform",
23
+ encoders=[],
24
+ accelerator=None,
25
+ latents_scale=None,
26
+ latents_bias=None,
27
+ ):
28
+ self.prediction = prediction
29
+ self.weighting = weighting
30
+ self.path_type = path_type
31
+ self.encoders = encoders
32
+ self.accelerator = accelerator
33
+ self.latents_scale = latents_scale
34
+ self.latents_bias = latents_bias
35
+
36
+ def interpolant(self, t):
37
+ if self.path_type == "linear":
38
+ alpha_t = 1 - t
39
+ sigma_t = t
40
+ d_alpha_t = -1
41
+ d_sigma_t = 1
42
+ elif self.path_type == "cosine":
43
+ alpha_t = torch.cos(t * np.pi / 2)
44
+ sigma_t = torch.sin(t * np.pi / 2)
45
+ d_alpha_t = -np.pi / 2 * torch.sin(t * np.pi / 2)
46
+ d_sigma_t = np.pi / 2 * torch.cos(t * np.pi / 2)
47
+ else:
48
+ raise NotImplementedError()
49
+
50
+ return alpha_t, sigma_t, d_alpha_t, d_sigma_t
51
+
52
+ def __call__(self, model, images, model_kwargs=None, zs=None):
53
+ if model_kwargs is None:
54
+ model_kwargs = {}
55
+ # sample timesteps
56
+ if self.weighting == "uniform":
57
+ time_input = torch.rand((images.shape[0], 1, 1, 1))
58
+ elif self.weighting == "lognormal":
59
+ # sample timestep according to log-normal distribution of sigmas following EDM
60
+ rnd_normal = torch.randn((images.shape[0], 1 ,1, 1))
61
+ sigma = rnd_normal.exp()
62
+ if self.path_type == "linear":
63
+ time_input = sigma / (1 + sigma)
64
+ elif self.path_type == "cosine":
65
+ time_input = 2 / np.pi * torch.atan(sigma)
66
+
67
+ time_input = time_input.to(device=images.device, dtype=images.dtype)
68
+
69
+ noises = torch.randn_like(images)
70
+ alpha_t, sigma_t, d_alpha_t, d_sigma_t = self.interpolant(time_input)
71
+
72
+ model_input = alpha_t * images + sigma_t * noises
73
+ if self.prediction == 'v':
74
+ model_target = d_alpha_t * images + d_sigma_t * noises
75
+ else:
76
+ raise NotImplementedError() # TODO: add x or eps prediction
77
+ model_output, zs_tilde = model(model_input, time_input.flatten(), **model_kwargs)
78
+ denoising_loss = mean_flat((model_output - model_target) ** 2)
79
+
80
+ epsilon = 1e-8
81
+ t_weight = torch.sigmoid(torch.log(alpha_t / (sigma_t + epsilon))).squeeze()
82
+
83
+ # projection loss
84
+ proj_loss = 0.
85
+ if len(zs) > 0:
86
+ bsz = zs[0].shape[0]
87
+ for i, (z, z_tilde) in enumerate(zip(zs, zs_tilde)):
88
+ for j, (z_j, z_tilde_j) in enumerate(zip(z, z_tilde)):
89
+ z_tilde_j = torch.nn.functional.normalize(z_tilde_j, dim=-1)
90
+ z_j = torch.nn.functional.normalize(z_j, dim=-1)
91
+ proj_loss += mean_flat(-(z_j * z_tilde_j).sum(dim=-1)) * t_weight[j]
92
+ proj_loss /= (len(zs) * bsz)
93
+
94
+ return denoising_loss, proj_loss
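For reference, the quantities computed above can be written compactly for the linear path (a restatement of the code; the numerical epsilon is dropped):

```latex
x_t = \alpha_t x_0 + \sigma_t \epsilon, \qquad \alpha_t = 1 - t, \quad \sigma_t = t, \qquad
v_t = \dot{\alpha}_t x_0 + \dot{\sigma}_t \epsilon = \epsilon - x_0

w(t) = \operatorname{sigmoid}\!\left(\log\frac{\alpha_t}{\sigma_t}\right)
     = \frac{\alpha_t}{\alpha_t + \sigma_t} = 1 - t
```

so the representation-alignment (projection) term is weighted most strongly at low-noise timesteps and fades out as t approaches 1.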
models.py ADDED
@@ -0,0 +1,747 @@
1
+ import math
2
+ from typing import Dict, Optional
3
+ import numpy as np
4
+ import torch
5
+ import torch.nn as nn
6
+ from einops import rearrange, repeat
7
+
8
+ import torch, math
9
+ from torch import nn
10
+
11
+ def prob_mask_like(shape, prob, device):
12
+ if prob == 1:
13
+ return torch.ones(shape, device = device, dtype = torch.bool)
14
+ elif prob == 0:
15
+ return torch.zeros(shape, device = device, dtype = torch.bool)
16
+ else:
17
+ return torch.zeros(shape, device = device).float().uniform_(0, 1) < prob
18
+
19
+
20
+ def attention(q, k, v, heads, mask=None):
21
+ """Convenience wrapper around a basic attention operation"""
22
+ b, _, dim_head = q.shape
23
+ dim_head //= heads
24
+ q, k, v = map(lambda t: t.view(b, -1, heads, dim_head).transpose(1, 2), (q, k, v))
25
+ out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
26
+ return out.transpose(1, 2).reshape(b, -1, heads * dim_head)
27
+
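A quick shape sketch for the wrapper above (illustrative only, not part of the uploaded file): q, k and v are packed as (batch, seq_len, heads * dim_head), the heads are split out, attention is applied, and the heads are packed back:

```python
import torch

q = k = v = torch.randn(2, 10, 8 * 64)  # 8 heads of width 64
out = attention(q, k, v, heads=8)
print(out.shape)  # torch.Size([2, 10, 512])
```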
28
+
29
+ class Mlp(nn.Module):
30
+ """ MLP as used in Vision Transformer, MLP-Mixer and related networks"""
31
+ def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, bias=True, dtype=None, device=None):
32
+ super().__init__()
33
+ out_features = out_features or in_features
34
+ hidden_features = hidden_features or in_features
35
+
36
+ self.fc1 = nn.Linear(in_features, hidden_features, bias=bias, dtype=dtype, device=device)
37
+ self.act = act_layer
38
+ self.fc2 = nn.Linear(hidden_features, out_features, bias=bias, dtype=dtype, device=device)
39
+
40
+ def forward(self, x):
41
+ x = self.fc1(x)
42
+ x = self.act(x)
43
+ x = self.fc2(x)
44
+ return x
45
+
46
+ def build_mlp(hidden_size, projector_dim, z_dim):
47
+ return nn.Sequential(
48
+ nn.Conv1d(in_channels=816, out_channels=416, kernel_size=1),
49
+ nn.SiLU(),
50
+ nn.Linear(hidden_size, projector_dim),
51
+ nn.SiLU(),
52
+ nn.Linear(projector_dim, projector_dim),
53
+ nn.SiLU(),
54
+ nn.Linear(projector_dim, z_dim),
55
+ )
56
+
57
+ class PatchEmbed(nn.Module):
58
+ """ 2D Image to Patch Embedding"""
59
+ def __init__(
60
+ self,
61
+ img_size: Optional[int] = 224,
62
+ patch_size: int = 16,
63
+ in_chans: int = 3,
64
+ embed_dim: int = 768,
65
+ flatten: bool = True,
66
+ bias: bool = True,
67
+ strict_img_size: bool = True,
68
+ dynamic_img_pad: bool = False,
69
+ dtype=None,
70
+ device=None,
71
+ ):
72
+ super().__init__()
73
+ self.patch_size = patch_size
74
+ if img_size is not None:
75
+ self.img_size = img_size
76
+ self.grid_size = (img_size[0] // patch_size[0], img_size[1] // patch_size[1])
77
+ self.num_patches = self.grid_size[0] * self.grid_size[1]
78
+ else:
79
+ self.img_size = None
80
+ self.grid_size = None
81
+ self.num_patches = None
82
+
83
+ # flatten spatial dim and transpose to channels last, kept for bwd compat
84
+ self.flatten = flatten
85
+ self.strict_img_size = strict_img_size
86
+ self.dynamic_img_pad = dynamic_img_pad
87
+
88
+ self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size, bias=bias, dtype=dtype, device=device)
89
+
90
+ def forward(self, x):
91
+ B, C, H, W = x.shape
92
+ x = self.proj(x)
93
+ if self.flatten:
94
+ x = x.flatten(2).transpose(1, 2) # NCHW -> NLC
95
+ return x
96
+
97
+
98
+ def modulate(x, shift, scale):
99
+ if shift is None:
100
+ shift = torch.zeros_like(scale)
101
+ return x * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
102
+
103
+
104
+ #################################################################################
105
+ # Sine/Cosine Positional Embedding Functions #
106
+ #################################################################################
107
+
108
+
109
+ def get_2d_sincos_pos_embed(embed_dim, grid_size_1, grid_size_2, cls_token=False, extra_tokens=0, scaling_factor=None, offset=None):
110
+ """
111
+ grid_size_1, grid_size_2: ints giving the grid height and width
112
+ return:
113
+ pos_embed: [grid_size_1*grid_size_2, embed_dim] or [extra_tokens+grid_size_1*grid_size_2, embed_dim] (w/ or w/o cls_token)
114
+ """
115
+ grid_h = np.arange(grid_size_1, dtype=np.float32)
116
+ grid_w = np.arange(grid_size_2, dtype=np.float32)
117
+ grid = np.meshgrid(grid_w, grid_h) # here w goes first
118
+ grid = np.stack(grid, axis=0)
119
+ if scaling_factor is not None:
120
+ grid = grid / scaling_factor
121
+ if offset is not None:
122
+ grid = grid - offset
123
+ grid = grid.reshape([2, 1, grid_size_1, grid_size_2])
124
+ pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid)
125
+ if cls_token and extra_tokens > 0:
126
+ pos_embed = np.concatenate([np.zeros([extra_tokens, embed_dim]), pos_embed], axis=0)
127
+ return pos_embed
128
+
129
+
130
+ def get_2d_sincos_pos_embed_from_grid(embed_dim, grid):
131
+ assert embed_dim % 2 == 0
132
+ # use half of dimensions to encode grid_h
133
+ emb_h = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[0]) # (H*W, D/2)
134
+ emb_w = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[1]) # (H*W, D/2)
135
+ emb = np.concatenate([emb_h, emb_w], axis=1) # (H*W, D)
136
+ return emb
137
+
138
+
139
+ def get_1d_sincos_pos_embed_from_grid(embed_dim, pos):
140
+ """
141
+ embed_dim: output dimension for each position
142
+ pos: a list of positions to be encoded: size (M,)
143
+ out: (M, D)
144
+ """
145
+ assert embed_dim % 2 == 0
146
+ omega = np.arange(embed_dim // 2, dtype=np.float64)
147
+ omega /= embed_dim / 2.0
148
+ omega = 1.0 / 10000**omega # (D/2,)
149
+ pos = pos.reshape(-1) # (M,)
150
+ out = np.einsum("m,d->md", pos, omega) # (M, D/2), outer product
151
+ emb_sin = np.sin(out) # (M, D/2)
152
+ emb_cos = np.cos(out) # (M, D/2)
153
+ return np.concatenate([emb_sin, emb_cos], axis=1) # (M, D)
154
+
155
+
156
+ #################################################################################
157
+ # Embedding Layers for Timesteps and Class Labels #
158
+ #################################################################################
159
+
160
+
161
+ class TimestepEmbedder(nn.Module):
162
+ """Embeds scalar timesteps into vector representations."""
163
+
164
+ def __init__(self, hidden_size, frequency_embedding_size=256, dtype=None, device=None):
165
+ super().__init__()
166
+ self.mlp = nn.Sequential(
167
+ nn.Linear(frequency_embedding_size, hidden_size, bias=True, dtype=dtype, device=device),
168
+ nn.SiLU(),
169
+ nn.Linear(hidden_size, hidden_size, bias=True, dtype=dtype, device=device),
170
+ )
171
+ self.frequency_embedding_size = frequency_embedding_size
172
+
173
+ @staticmethod
174
+ def timestep_embedding(t, dim, max_period=10000):
175
+ """
176
+ Create sinusoidal timestep embeddings.
177
+ :param t: a 1-D Tensor of N indices, one per batch element.
178
+ These may be fractional.
179
+ :param dim: the dimension of the output.
180
+ :param max_period: controls the minimum frequency of the embeddings.
181
+ :return: an (N, D) Tensor of positional embeddings.
182
+ """
183
+ half = dim // 2
184
+ freqs = torch.exp(
185
+ -math.log(max_period)
186
+ * torch.arange(start=0, end=half, dtype=torch.float32)
187
+ / half
188
+ ).to(device=t.device)
189
+ args = t[:, None].float() * freqs[None]
190
+ embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
191
+ if dim % 2:
192
+ embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)
193
+ if torch.is_floating_point(t):
194
+ embedding = embedding.to(dtype=t.dtype)
195
+ return embedding
196
+
197
+ def forward(self, t, dtype, **kwargs):
198
+ t_freq = self.timestep_embedding(t, self.frequency_embedding_size).to(dtype)
199
+ t_emb = self.mlp(t_freq)
200
+ return t_emb
201
+
202
+
203
+ class VectorEmbedder(nn.Module):
204
+ """Embeds a flat vector of dimension input_dim"""
205
+
206
+ def __init__(self, input_dim: int, hidden_size: int, dtype=None, device=None):
207
+ super().__init__()
208
+ self.mlp = nn.Sequential(
209
+ nn.Linear(input_dim, hidden_size, bias=True, dtype=dtype, device=device),
210
+ nn.SiLU(),
211
+ nn.Linear(hidden_size, hidden_size, bias=True, dtype=dtype, device=device),
212
+ )
213
+
214
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
215
+ return self.mlp(x)
216
+
217
+
218
+ #################################################################################
219
+ # Core DiT Model #
220
+ #################################################################################
221
+
222
+
223
+ def split_qkv(qkv, head_dim):
224
+ qkv = qkv.reshape(qkv.shape[0], qkv.shape[1], 3, -1, head_dim).movedim(2, 0)
225
+ return qkv[0], qkv[1], qkv[2]
226
+
227
+ def optimized_attention(qkv, num_heads):
228
+ return attention(qkv[0], qkv[1], qkv[2], num_heads)
229
+
230
+ class SelfAttention(nn.Module):
231
+ ATTENTION_MODES = ("xformers", "torch", "torch-hb", "math", "debug")
232
+
233
+ def __init__(
234
+ self,
235
+ dim: int,
236
+ num_heads: int = 8,
237
+ qkv_bias: bool = False,
238
+ qk_scale: Optional[float] = None,
239
+ attn_mode: str = "xformers",
240
+ pre_only: bool = False,
241
+ qk_norm: Optional[str] = None,
242
+ rmsnorm: bool = False,
243
+ dtype=None,
244
+ device=None,
245
+ ):
246
+ super().__init__()
247
+ self.num_heads = num_heads
248
+ self.head_dim = dim // num_heads
249
+
250
+ self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias, dtype=dtype, device=device)
251
+ if not pre_only:
252
+ self.proj = nn.Linear(dim, dim, dtype=dtype, device=device)
253
+ assert attn_mode in self.ATTENTION_MODES
254
+ self.attn_mode = attn_mode
255
+ self.pre_only = pre_only
256
+
257
+ if qk_norm == "rms":
258
+ self.ln_q = RMSNorm(self.head_dim, elementwise_affine=True, eps=1.0e-6, dtype=dtype, device=device)
259
+ self.ln_k = RMSNorm(self.head_dim, elementwise_affine=True, eps=1.0e-6, dtype=dtype, device=device)
260
+ elif qk_norm == "ln":
261
+ self.ln_q = nn.LayerNorm(self.head_dim, elementwise_affine=True, eps=1.0e-6, dtype=dtype, device=device)
262
+ self.ln_k = nn.LayerNorm(self.head_dim, elementwise_affine=True, eps=1.0e-6, dtype=dtype, device=device)
263
+ elif qk_norm is None:
264
+ self.ln_q = nn.Identity()
265
+ self.ln_k = nn.Identity()
266
+ else:
267
+ raise ValueError(qk_norm)
268
+
269
+ def pre_attention(self, x: torch.Tensor):
270
+ B, L, C = x.shape
271
+ qkv = self.qkv(x)
272
+ q, k, v = split_qkv(qkv, self.head_dim)
273
+ q = self.ln_q(q).reshape(q.shape[0], q.shape[1], -1)
274
+ k = self.ln_k(k).reshape(q.shape[0], q.shape[1], -1)
275
+ return (q, k, v)
276
+
277
+ def post_attention(self, x: torch.Tensor) -> torch.Tensor:
278
+ assert not self.pre_only
279
+ x = self.proj(x)
280
+ return x
281
+
282
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
283
+ (q, k, v) = self.pre_attention(x)
284
+ x = attention(q, k, v, self.num_heads)
285
+ x = self.post_attention(x)
286
+ return x
287
+
288
+
289
+ class RMSNorm(torch.nn.Module):
290
+ def __init__(
291
+ self, dim: int, elementwise_affine: bool = False, eps: float = 1e-6, device=None, dtype=None
292
+ ):
293
+ """
294
+ Initialize the RMSNorm normalization layer.
295
+ Args:
296
+ dim (int): The dimension of the input tensor.
297
+ eps (float, optional): A small value added to the denominator for numerical stability. Default is 1e-6.
298
+ Attributes:
299
+ eps (float): A small value added to the denominator for numerical stability.
300
+ weight (nn.Parameter): Learnable scaling parameter.
301
+ """
302
+ super().__init__()
303
+ self.eps = eps
304
+ self.learnable_scale = elementwise_affine
305
+ if self.learnable_scale:
306
+ self.weight = nn.Parameter(torch.empty(dim, device=device, dtype=dtype))
307
+ else:
308
+ self.register_parameter("weight", None)
309
+
310
+ def _norm(self, x):
311
+ """
312
+ Apply the RMSNorm normalization to the input tensor.
313
+ Args:
314
+ x (torch.Tensor): The input tensor.
315
+ Returns:
316
+ torch.Tensor: The normalized tensor.
317
+ """
318
+ return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
319
+
320
+ def forward(self, x):
321
+ """
322
+ Forward pass through the RMSNorm layer.
323
+ Args:
324
+ x (torch.Tensor): The input tensor.
325
+ Returns:
326
+ torch.Tensor: The output tensor after applying RMSNorm.
327
+ """
328
+ x = self._norm(x)
329
+ if self.learnable_scale:
330
+ return x * self.weight.to(device=x.device, dtype=x.dtype)
331
+ else:
332
+ return x
333
+
334
+
335
+ class SwiGLUFeedForward(nn.Module):
336
+ def __init__(
337
+ self,
338
+ dim: int,
339
+ hidden_dim: int,
340
+ multiple_of: int,
341
+ ffn_dim_multiplier: Optional[float] = None,
342
+ ):
343
+ """
344
+ Initialize the FeedForward module.
345
+
346
+ Args:
347
+ dim (int): Input dimension.
348
+ hidden_dim (int): Hidden dimension of the feedforward layer.
349
+ multiple_of (int): Value to ensure hidden dimension is a multiple of this value.
350
+ ffn_dim_multiplier (float, optional): Custom multiplier for hidden dimension. Defaults to None.
351
+
352
+ Attributes:
353
+ w1 (ColumnParallelLinear): Linear transformation for the first layer.
354
+ w2 (RowParallelLinear): Linear transformation for the second layer.
355
+ w3 (ColumnParallelLinear): Linear transformation for the third layer.
356
+
357
+ """
358
+ super().__init__()
359
+ hidden_dim = int(2 * hidden_dim / 3)
360
+ # custom dim factor multiplier
361
+ if ffn_dim_multiplier is not None:
362
+ hidden_dim = int(ffn_dim_multiplier * hidden_dim)
363
+ hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of)
364
+
365
+ self.w1 = nn.Linear(dim, hidden_dim, bias=False)
366
+ self.w2 = nn.Linear(hidden_dim, dim, bias=False)
367
+ self.w3 = nn.Linear(dim, hidden_dim, bias=False)
368
+
369
+ def forward(self, x):
370
+ return self.w2(nn.functional.silu(self.w1(x)) * self.w3(x))
371
+
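Written out, the feed-forward above computes (a restatement of the code):

```latex
\mathrm{FFN}(x) = W_2\bigl(\operatorname{SiLU}(W_1 x) \odot W_3 x\bigr)
```

where the hidden width is first scaled by 2/3 (and by `ffn_dim_multiplier` if given) and then rounded up to a multiple of `multiple_of`.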
372
+
373
+ class DismantledBlock(nn.Module):
374
+ """A DiT block with gated adaptive layer norm (adaLN) conditioning."""
375
+
376
+ ATTENTION_MODES = ("xformers", "torch", "torch-hb", "math", "debug")
377
+
378
+ def __init__(
379
+ self,
380
+ hidden_size: int,
381
+ num_heads: int,
382
+ mlp_ratio: float = 4.0,
383
+ attn_mode: str = "xformers",
384
+ qkv_bias: bool = False,
385
+ pre_only: bool = False,
386
+ rmsnorm: bool = False,
387
+ scale_mod_only: bool = False,
388
+ swiglu: bool = False,
389
+ qk_norm: Optional[str] = None,
390
+ dtype=None,
391
+ device=None,
392
+ **block_kwargs,
393
+ ):
394
+ super().__init__()
395
+ assert attn_mode in self.ATTENTION_MODES
396
+ if not rmsnorm:
397
+ self.norm1 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)
398
+ else:
399
+ self.norm1 = RMSNorm(hidden_size, elementwise_affine=False, eps=1e-6)
400
+ self.attn = SelfAttention(dim=hidden_size, num_heads=num_heads, qkv_bias=qkv_bias, attn_mode=attn_mode, pre_only=pre_only, qk_norm=qk_norm, rmsnorm=rmsnorm, dtype=dtype, device=device)
401
+ if not pre_only:
402
+ if not rmsnorm:
403
+ self.norm2 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)
404
+ else:
405
+ self.norm2 = RMSNorm(hidden_size, elementwise_affine=False, eps=1e-6)
406
+ mlp_hidden_dim = int(hidden_size * mlp_ratio)
407
+ if not pre_only:
408
+ if not swiglu:
409
+ self.mlp = Mlp(in_features=hidden_size, hidden_features=mlp_hidden_dim, act_layer=nn.GELU(approximate="tanh"), dtype=dtype, device=device)
410
+ else:
411
+ self.mlp = SwiGLUFeedForward(dim=hidden_size, hidden_dim=mlp_hidden_dim, multiple_of=256)
412
+ self.scale_mod_only = scale_mod_only
413
+ if not scale_mod_only:
414
+ n_mods = 6 if not pre_only else 2
415
+ else:
416
+ n_mods = 4 if not pre_only else 1
417
+ self.adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(hidden_size, n_mods * hidden_size, bias=True, dtype=dtype, device=device))
418
+ self.pre_only = pre_only
419
+
420
+ def pre_attention(self, x: torch.Tensor, c: torch.Tensor):
421
+ assert x is not None, "pre_attention called with None input"
422
+ if not self.pre_only:
423
+ if not self.scale_mod_only:
424
+ shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp = self.adaLN_modulation(c).chunk(6, dim=1)
425
+ else:
426
+ shift_msa = None
427
+ shift_mlp = None
428
+ scale_msa, gate_msa, scale_mlp, gate_mlp = self.adaLN_modulation(c).chunk(4, dim=1)
429
+ qkv = self.attn.pre_attention(modulate(self.norm1(x), shift_msa, scale_msa))
430
+ return qkv, (x, gate_msa, shift_mlp, scale_mlp, gate_mlp)
431
+ else:
432
+ if not self.scale_mod_only:
433
+ shift_msa, scale_msa = self.adaLN_modulation(c).chunk(2, dim=1)
434
+ else:
435
+ shift_msa = None
436
+ scale_msa = self.adaLN_modulation(c)
437
+ qkv = self.attn.pre_attention(modulate(self.norm1(x), shift_msa, scale_msa))
438
+ return qkv, None
439
+
440
+ def post_attention(self, attn, x, gate_msa, shift_mlp, scale_mlp, gate_mlp):
441
+ assert not self.pre_only
442
+ x = x + gate_msa.unsqueeze(1) * self.attn.post_attention(attn)
443
+ x = x + gate_mlp.unsqueeze(1) * self.mlp(modulate(self.norm2(x), shift_mlp, scale_mlp))
444
+ return x
445
+
446
+ def forward(self, x: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
447
+ assert not self.pre_only
448
+ (q, k, v), intermediates = self.pre_attention(x, c)
449
+ attn = attention(q, k, v, self.attn.num_heads)
450
+ return self.post_attention(attn, *intermediates)
451
+
452
+
453
+ def block_mixing(context, x, context_block, x_block, c):
454
+ assert context is not None, "block_mixing called with None context"
455
+ context_qkv, context_intermediates = context_block.pre_attention(context, c)
456
+
457
+ x_qkv, x_intermediates = x_block.pre_attention(x, c)
458
+
459
+ o = []
460
+ for t in range(3):
461
+ o.append(torch.cat((context_qkv[t], x_qkv[t]), dim=1))
462
+ q, k, v = tuple(o)
463
+
464
+ attn = attention(q, k, v, x_block.attn.num_heads)
465
+ context_attn, x_attn = (attn[:, : context_qkv[0].shape[1]], attn[:, context_qkv[0].shape[1] :])
466
+
467
+ if not context_block.pre_only:
468
+ context = context_block.post_attention(context_attn, *context_intermediates)
469
+ else:
470
+ context = None
471
+ x = x_block.post_attention(x_attn, *x_intermediates)
472
+ return context, x
473
+
474
+
475
+ class JointBlock(nn.Module):
476
+ """just a small wrapper to serve as a fsdp unit"""
477
+
478
+ def __init__(self, *args, **kwargs):
479
+ super().__init__()
480
+ pre_only = kwargs.pop("pre_only")
481
+ qk_norm = kwargs.pop("qk_norm", None)
482
+ self.context_block = DismantledBlock(*args, pre_only=pre_only, qk_norm=qk_norm, **kwargs)
483
+ self.x_block = DismantledBlock(*args, pre_only=False, qk_norm=qk_norm, **kwargs)
484
+
485
+ def forward(self, *args, **kwargs):
486
+ return block_mixing(*args, context_block=self.context_block, x_block=self.x_block, **kwargs)
487
+
488
+
489
+ class FinalLayer(nn.Module):
490
+ """
491
+ The final layer of DiT.
492
+ """
493
+
494
+ def __init__(self, hidden_size: int, patch_size, out_channels: int, total_out_channels: Optional[int] = None, dtype=None, device=None):
495
+ super().__init__()
496
+ self.norm_final = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)
497
+ self.linear = (
498
+ nn.Linear(hidden_size, patch_size[0] * patch_size[1] * out_channels, bias=True, dtype=dtype, device=device)
499
+ if (total_out_channels is None)
500
+ else nn.Linear(hidden_size, total_out_channels, bias=True, dtype=dtype, device=device)
501
+ )
502
+ self.adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(hidden_size, 2 * hidden_size, bias=True, dtype=dtype, device=device))
503
+
504
+ def forward(self, x: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
505
+ shift, scale = self.adaLN_modulation(c).chunk(2, dim=1)
506
+ x = modulate(self.norm_final(x), shift, scale)
507
+ x = self.linear(x)
508
+ return x
509
+
510
+
511
+ class MMDiT(nn.Module):
512
+ """Diffusion model with a Transformer backbone."""
513
+
514
+ def __init__(
515
+ self,
516
+ input_size=(204, 16),
517
+ patch_size=(2, 2),
518
+ in_channels: int = 8,
519
+ depth: int = 12,
520
+ mlp_ratio: float = 4.0,
521
+ learn_sigma: bool = False,
522
+ adm_in_channels: Optional[int] = None,
523
+ context_embedder_config: Optional[Dict] = None,
524
+ register_length: int = 0,
525
+ attn_mode: str = "torch",
526
+ rmsnorm: bool = False,
527
+ scale_mod_only: bool = False,
528
+ swiglu: bool = False,
529
+ out_channels: Optional[int] = None,
530
+ pos_embed_scaling_factor: Optional[float] = None,
531
+ pos_embed_offset: Optional[float] = None,
532
+ pos_embed_max_size: Optional[int] = None,
533
+ num_patches = None,
534
+ qk_norm: Optional[str] = None,
535
+ qkv_bias: bool = True,
536
+ dtype = None,
537
+ device = None,
538
+ encoder_depth = 4,
539
+ z_dims=[768],
540
+ projector_dim=2048,
541
+ ):
542
+ super().__init__()
543
+ print(f"mmdit initializing with: {input_size=}, {patch_size=}, {in_channels=}, {depth=}, {mlp_ratio=}, {learn_sigma=}, {adm_in_channels=}, {context_embedder_config=}, {register_length=}, {attn_mode=}, {rmsnorm=}, {scale_mod_only=}, {swiglu=}, {out_channels=}, {pos_embed_scaling_factor=}, {pos_embed_offset=}, {pos_embed_max_size=}, {num_patches=}, {qk_norm=}, {qkv_bias=}, {dtype=}, {device=}")
544
+ self.dtype = dtype
545
+ self.learn_sigma = learn_sigma
546
+ self.in_channels = in_channels
547
+ default_out_channels = in_channels * 2 if learn_sigma else in_channels
548
+ self.out_channels = out_channels if out_channels is not None else default_out_channels
549
+ self.patch_size = patch_size
550
+ self.pos_embed_scaling_factor = pos_embed_scaling_factor
551
+ self.pos_embed_offset = pos_embed_offset
552
+ self.pos_embed_max_size = pos_embed_max_size = (102, 8)
553
+
554
+
555
+ # apply magic --> this defines a head_size of 64
556
+ hidden_size = 64 * depth
557
+ # hidden_size = 32 * depth
558
+ num_heads = depth
559
+
560
+ self.num_heads = num_heads
561
+
562
+ self.x_embedder = PatchEmbed(input_size, patch_size, in_channels, hidden_size, bias=True, strict_img_size=self.pos_embed_max_size is None, dtype=dtype, device=device)
563
+ self.t_embedder = TimestepEmbedder(hidden_size, dtype=dtype, device=device)
564
+
565
+ if adm_in_channels is not None:
566
+ assert isinstance(adm_in_channels, int)
567
+ self.y_embedder = VectorEmbedder(adm_in_channels, hidden_size, dtype=dtype, device=device)
568
+ else:
569
+ self.y_embedder = None
570
+
571
+ self.context_embedder = nn.Identity()
572
+ # TODO: hard-coded; this overrides any context_embedder_config passed to the constructor
573
+ context_embedder_config = {"params": {"in_features": 512, "out_features": hidden_size}, "target": "torch.nn.Linear"}
574
+ if context_embedder_config is not None:
575
+ if context_embedder_config["target"] == "torch.nn.Linear":
576
+ self.context_embedder = nn.Linear(**context_embedder_config["params"], dtype=dtype, device=device)
577
+
578
+ self.register_length = register_length
579
+ if self.register_length > 0:
580
+ self.register = nn.Parameter(torch.randn(1, register_length, hidden_size, dtype=dtype, device=device))
581
+
582
+ num_patches = self.x_embedder.num_patches
583
+ # Will use fixed sin-cos embedding:
584
+ # just use a buffer already
585
+ if num_patches is not None:
586
+ self.register_buffer(
587
+ "pos_embed",
588
+ torch.zeros(1, num_patches, hidden_size, dtype=dtype, device=device),
589
+ )
590
+ else:
591
+ self.pos_embed = None
592
+
593
+ self.joint_blocks = nn.ModuleList(
594
+ [
595
+ JointBlock(hidden_size, num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, attn_mode=attn_mode, pre_only=i == depth - 1, rmsnorm=rmsnorm, scale_mod_only=scale_mod_only, swiglu=swiglu, qk_norm=qk_norm, dtype=dtype, device=device)
596
+ for i in range(depth)
597
+ ]
598
+ )
599
+
600
+ self.final_layer = FinalLayer(hidden_size, patch_size, self.out_channels, dtype=dtype, device=device)
601
+
602
+ # REPA
603
+ self.encoder_depth = encoder_depth
604
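+ # Projection heads that map the hidden states captured at encoder_depth to each target feature dimension in z_dims (REPA alignment).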
+ self.projectors = nn.ModuleList([
605
+ build_mlp(hidden_size, projector_dim, z_dim) for z_dim in z_dims
606
+ ])
607
+
608
+ # Initialize (and freeze) pos_embed by sin-cos embedding:
609
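+ # (102, 8) patch grid corresponds to the (204, 16) input with 2x2 patches.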
+ grid_size_1 = 102
610
+ grid_size_2 = 8
611
+ pos_embed = get_2d_sincos_pos_embed(
612
+ self.pos_embed.shape[-1], grid_size_1, grid_size_2
613
+ )
614
+ self.pos_embed.data.copy_(torch.from_numpy(pos_embed).float().unsqueeze(0))
615
+
616
+ self.initialize_weights()
617
+
618
+ def initialize_weights(self):
619
+ # Initialize transformer layers:
620
+ def _basic_init(module):
621
+ if isinstance(module, nn.Linear):
622
+ torch.nn.init.xavier_uniform_(module.weight)
623
+ if module.bias is not None:
624
+ nn.init.constant_(module.bias, 0)
625
+ self.apply(_basic_init)
626
+
627
+ # Initialize timestep embedding MLP:
628
+ nn.init.normal_(self.t_embedder.mlp[0].weight, std=0.02)
629
+ nn.init.normal_(self.t_embedder.mlp[2].weight, std=0.02)
630
+
631
+ nn.init.normal_(self.context_embedder.weight, std=0.02)
632
+
633
+ if self.y_embedder is not None:
634
+ nn.init.normal_(self.y_embedder.mlp[0].weight, std=0.02)
635
+ nn.init.normal_(self.y_embedder.mlp[2].weight, std=0.02)
636
+
637
+ # Zero-out adaLN modulation layers in DiT blocks:
638
+ for block in self.joint_blocks:
639
+ nn.init.constant_(block.context_block.adaLN_modulation[-1].weight, 0)
640
+ nn.init.constant_(block.context_block.adaLN_modulation[-1].bias, 0)
641
+ nn.init.constant_(block.x_block.adaLN_modulation[-1].weight, 0)
642
+ nn.init.constant_(block.x_block.adaLN_modulation[-1].bias, 0)
643
+
644
+ # Zero-out output layers:
645
+ nn.init.constant_(self.final_layer.adaLN_modulation[-1].weight, 0)
646
+ nn.init.constant_(self.final_layer.adaLN_modulation[-1].bias, 0)
647
+ nn.init.constant_(self.final_layer.linear.weight, 0)
648
+ nn.init.constant_(self.final_layer.linear.bias, 0)
649
+
650
+ def cropped_pos_embed(self, hw):
651
+ assert self.pos_embed_max_size is not None
652
+ p1, p2 = self.x_embedder.patch_size
653
+ h, w = hw
654
+ # patched size
655
+ h = h // p1
656
+ w = w // p2
657
+
658
+ assert h <= self.pos_embed_max_size[0], (h, self.pos_embed_max_size[0])
659
+ assert w <= self.pos_embed_max_size[1], (w, self.pos_embed_max_size[1])
660
+ top = (self.pos_embed_max_size[0] - h) // 2
661
+ left = (self.pos_embed_max_size[1] - w) // 2
662
+ spatial_pos_embed = rearrange(
663
+ self.pos_embed,
664
+ "1 (h w) c -> 1 h w c",
665
+ h=self.pos_embed_max_size[0],
666
+ w=self.pos_embed_max_size[1],
667
+ )
668
+ spatial_pos_embed = spatial_pos_embed[:, top : top + h, left : left + w, :]
669
+ spatial_pos_embed = rearrange(spatial_pos_embed, "1 h w c -> 1 (h w) c")
670
+ return spatial_pos_embed
671
+
672
+ def unpatchify(self, x, hw=None):
673
+ """
674
+ x: (N, T, patch_size**2 * C)
675
+ imgs: (N, H, W, C)
676
+ """
677
+ c = self.out_channels
678
+ p1, p2 = self.x_embedder.patch_size
679
+ h, w = hw
680
+ # patched size
681
+ h = h // p1
682
+ w = w // p2
683
+ assert h * w == x.shape[1]
684
+
685
+ h_1, w_1 = self.x_embedder.img_size
686
+
687
+ x = x.reshape(shape=(x.shape[0], h, w, p1, p2, c))
688
+ x = torch.einsum('nhwpqc->nchpwq', x)
689
+ imgs = x.reshape(shape=(x.shape[0], c, h_1, w_1))
690
+
691
+ return imgs
692
+
693
+ def forward_core_with_concat(
694
+ self, x: torch.Tensor, c_mod: torch.Tensor, context: Optional[torch.Tensor] = None,
695
+ detach: Optional[bool] = False) -> torch.Tensor:
696
+ if self.register_length > 0:
697
+ context = torch.cat((repeat(self.register, "1 ... -> b ...", b=x.shape[0]), context if context is not None else torch.Tensor([]).type_as(x)), 1)
698
+
699
+ # context is B, L', D
700
+ # x is B, L, D
701
+ B, L, D = x.shape
702
+ for i, block in enumerate(self.joint_blocks):
703
+ context, x = block(context, x, c=c_mod)
704
+
705
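+ # Capture intermediate features at encoder_depth and project them for the representation-alignment (REPA) loss.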
+ if (i + 1) == self.encoder_depth:
706
+ zs = [projector(x) for projector in self.projectors]
707
+
708
+ x = self.final_layer(x, c_mod) # (N, T, patch_size ** 2 * out_channels)
709
+ return x, zs
710
+
711
+ def forward(
712
+ self, x: torch.Tensor, t: torch.Tensor, y: Optional[torch.Tensor] = None, context: Optional[torch.Tensor] = None, do_guidance=False,
713
+ detach: Optional[bool] = False) -> torch.Tensor:
714
+ """
715
+ Forward pass of DiT.
716
+ x: (N, C, H, W) tensor of spatial inputs (images or latent representations of images)
717
+ t: (N,) tensor of diffusion timesteps
718
+ y: (N,) tensor of class labels
719
+ """
720
+ hw = x.shape[-2:]
721
+ x = self.x_embedder(x) + self.cropped_pos_embed(hw)
722
+ c = self.t_embedder(t, dtype=x.dtype) # (N, D)
723
+
724
+ context = self.context_embedder(context)
725
+
726
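+ # Classifier-free guidance: during training the video conditioning is randomly dropped (10% of samples); with do_guidance, the second half of the batch is the unconditional branch.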
+ if self.training and not do_guidance:
727
+ cond_mask = prob_mask_like((context.shape[0],), prob = 1 - 0.1, device = context.device) # classifier free guidance
728
+ cond_mask = cond_mask.to(context.dtype)
729
+ context = cond_mask.view(-1, 1, 1) * context
730
+ elif do_guidance:
731
+ N = x.shape[0]
732
+ half_bs = N // 2
733
+ cond_mask = torch.cat((torch.ones(half_bs), torch.zeros(N - half_bs))).to(context.device)
734
+ cond_mask = cond_mask.to(context.dtype)
735
+ context = cond_mask.view(-1, 1, 1) * context
736
+ else:
737
+ cond_mask = torch.ones(context.shape[0], device = context.device, dtype = torch.bool)
738
+
739
+ if y is not None:
740
+ y = self.y_embedder(y)
741
+ y = cond_mask.view(-1, 1) * y
742
+ c = c + y # (N, D)
743
+
744
+ x, zs = self.forward_core_with_concat(x, c, context, detach)
745
+
746
+ x = self.unpatchify(x, hw=hw) # (N, out_channels, H, W)
747
+ return x, zs
onset_util.py ADDED
@@ -0,0 +1,446 @@
1
+ import torch
2
+ import torch.nn as nn
3
+ from torchvision.io import read_video
4
+ import os
5
+ from einops import rearrange
6
+ import torchvision.transforms as transforms
7
+ from cavp_util import reencode_video_with_diff_fps
8
+
9
+
10
+ def extract_onset(video_path, onset_model, tmp_path, device="cuda"):
11
+ """Extract onset features from video using a pre-trained onset detection model."""
12
+ # Preprocess the video frames
13
+ transform = transforms.Compose([
14
+ transforms.Resize((128, 128), antialias=True),
15
+ transforms.CenterCrop((112, 112)),
16
+ transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
17
+ ])
18
+
19
+ start_second = 0
20
+ truncate_second = 10
21
+ # Load the video, change fps:
22
+ video_path_low_fps = reencode_video_with_diff_fps(video_path, tmp_path, 15, start_second, truncate_second)
23
+ frames, _, _ = read_video(video_path_low_fps, pts_unit="sec", output_format="TCHW")
24
+ if frames.shape[0] >= 150:
25
+ frames = frames[:150]
26
+ elif frames.shape[0] >= 120:
27
+ frames = frames[:120]
28
+
29
+ # Transform frames
30
+ frames = frames / 255.0
31
+ frames = transform(frames)
32
+
33
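+ # Group frames into 2-second clips of 30 frames each (video re-encoded at 15 fps) before feeding the onset network.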
+ frames = rearrange(frames, '(b t) c h w -> b c t h w', t=30).to(device)
34
+
35
+ # Forward pass through the model to get onset features
36
+ with torch.no_grad():
37
+ onset_features = onset_model(frames).reshape(-1)
38
+
39
+ # Remove the file
40
+ os.remove(video_path_low_fps)
41
+ return onset_features.detach().cpu().numpy()
42
+
43
+ #################################################################################
44
+ # ResNet #
45
+ #################################################################################
46
+
47
+ __all__ = ['r3d_18', 'mc3_18', 'r2plus1d_18']
48
+
49
+ model_urls = {
50
+ 'r3d_18': 'https://download.pytorch.org/models/r3d_18-b3b3357e.pth',
51
+ 'mc3_18': 'https://download.pytorch.org/models/mc3_18-a90a0ba3.pth',
52
+ 'r2plus1d_18': 'https://download.pytorch.org/models/r2plus1d_18-91a641e6.pth',
53
+ }
54
+
55
+
56
+ class Conv3DSimple(nn.Conv3d):
57
+ def __init__(self,
58
+ in_planes,
59
+ out_planes,
60
+ midplanes=None,
61
+ stride=1,
62
+ padding=1):
63
+
64
+ super(Conv3DSimple, self).__init__(
65
+ in_channels=in_planes,
66
+ out_channels=out_planes,
67
+ kernel_size=(3, 3, 3),
68
+ stride=stride,
69
+ padding=padding,
70
+ bias=False)
71
+
72
+ @staticmethod
73
+ def get_downsample_stride(stride):
74
+ return stride, stride, stride
75
+
76
+
77
+ class Conv2Plus1D(nn.Sequential):
78
+
79
+ def __init__(self,
80
+ in_planes,
81
+ out_planes,
82
+ midplanes,
83
+ stride=1,
84
+ padding=1):
85
+ super(Conv2Plus1D, self).__init__(
86
+ nn.Conv3d(in_planes, midplanes, kernel_size=(1, 3, 3),
87
+ stride=(1, stride, stride), padding=(0, padding, padding),
88
+ bias=False),
89
+ nn.BatchNorm3d(midplanes),
90
+ nn.ReLU(inplace=True),
91
+ nn.Conv3d(midplanes, out_planes, kernel_size=(3, 1, 1),
92
+ stride=(stride, 1, 1), padding=(padding, 0, 0),
93
+ bias=False))
94
+
95
+ @staticmethod
96
+ def get_downsample_stride(stride):
97
+ return stride, stride, stride
98
+
99
+
100
+ class Conv3DNoTemporal(nn.Conv3d):
101
+
102
+ def __init__(self,
103
+ in_planes,
104
+ out_planes,
105
+ midplanes=None,
106
+ stride=1,
107
+ padding=1):
108
+
109
+ super(Conv3DNoTemporal, self).__init__(
110
+ in_channels=in_planes,
111
+ out_channels=out_planes,
112
+ kernel_size=(1, 3, 3),
113
+ stride=(1, stride, stride),
114
+ padding=(0, padding, padding),
115
+ bias=False)
116
+
117
+ @staticmethod
118
+ def get_downsample_stride(stride):
119
+ return 1, stride, stride
120
+
121
+
122
+ class BasicBlock(nn.Module):
123
+
124
+ expansion = 1
125
+
126
+ def __init__(self, inplanes, planes, conv_builder, stride=1, downsample=None):
127
+ midplanes = (inplanes * planes * 3 * 3 *
128
+ 3) // (inplanes * 3 * 3 + 3 * planes)
129
+
130
+ super(BasicBlock, self).__init__()
131
+ self.conv1 = nn.Sequential(
132
+ conv_builder(inplanes, planes, midplanes, stride),
133
+ nn.BatchNorm3d(planes),
134
+ nn.ReLU(inplace=True)
135
+ )
136
+ self.conv2 = nn.Sequential(
137
+ conv_builder(planes, planes, midplanes),
138
+ nn.BatchNorm3d(planes)
139
+ )
140
+ self.relu = nn.ReLU(inplace=True)
141
+ self.downsample = downsample
142
+ self.stride = stride
143
+
144
+ def forward(self, x):
145
+ residual = x
146
+
147
+ out = self.conv1(x)
148
+ out = self.conv2(out)
149
+ if self.downsample is not None:
150
+ residual = self.downsample(x)
151
+
152
+ out += residual
153
+ out = self.relu(out)
154
+
155
+ return out
156
+
157
+
158
+ class Bottleneck(nn.Module):
159
+ expansion = 4
160
+
161
+ def __init__(self, inplanes, planes, conv_builder, stride=1, downsample=None):
162
+
163
+ super(Bottleneck, self).__init__()
164
+ midplanes = (inplanes * planes * 3 * 3 *
165
+ 3) // (inplanes * 3 * 3 + 3 * planes)
166
+
167
+ # 1x1x1
168
+ self.conv1 = nn.Sequential(
169
+ nn.Conv3d(inplanes, planes, kernel_size=1, bias=False),
170
+ nn.BatchNorm3d(planes),
171
+ nn.ReLU(inplace=True)
172
+ )
173
+ # Second kernel
174
+ self.conv2 = nn.Sequential(
175
+ conv_builder(planes, planes, midplanes, stride),
176
+ nn.BatchNorm3d(planes),
177
+ nn.ReLU(inplace=True)
178
+ )
179
+
180
+ # 1x1x1
181
+ self.conv3 = nn.Sequential(
182
+ nn.Conv3d(planes, planes * self.expansion,
183
+ kernel_size=1, bias=False),
184
+ nn.BatchNorm3d(planes * self.expansion)
185
+ )
186
+ self.relu = nn.ReLU(inplace=True)
187
+ self.downsample = downsample
188
+ self.stride = stride
189
+
190
+ def forward(self, x):
191
+ residual = x
192
+
193
+ out = self.conv1(x)
194
+ out = self.conv2(out)
195
+ out = self.conv3(out)
196
+
197
+ if self.downsample is not None:
198
+ residual = self.downsample(x)
199
+
200
+ out += residual
201
+ out = self.relu(out)
202
+
203
+ return out
204
+
205
+
206
+ class BasicStem(nn.Sequential):
207
+ """The default conv-batchnorm-relu stem
208
+ """
209
+
210
+ def __init__(self):
211
+ super(BasicStem, self).__init__(
212
+ nn.Conv3d(3, 64, kernel_size=(3, 7, 7), stride=(1, 2, 2),
213
+ padding=(1, 3, 3), bias=False),
214
+ nn.BatchNorm3d(64),
215
+ nn.ReLU(inplace=True))
216
+
217
+
218
+ class R2Plus1dStem(nn.Sequential):
219
+ """R(2+1)D stem is different than the default one as it uses separated 3D convolution
220
+ """
221
+
222
+ def __init__(self):
223
+ super(R2Plus1dStem, self).__init__(
224
+ nn.Conv3d(3, 45, kernel_size=(1, 7, 7),
225
+ stride=(1, 2, 2), padding=(0, 3, 3),
226
+ bias=False),
227
+ nn.BatchNorm3d(45),
228
+ nn.ReLU(inplace=True),
229
+ nn.Conv3d(45, 64, kernel_size=(3, 1, 1),
230
+ stride=(1, 1, 1), padding=(1, 0, 0),
231
+ bias=False),
232
+ nn.BatchNorm3d(64),
233
+ nn.ReLU(inplace=True))
234
+
235
+
236
+ class VideoResNet(nn.Module):
237
+
238
+ def __init__(self, block, conv_makers, layers,
239
+ stem, num_classes=400,
240
+ zero_init_residual=False):
241
+ """Generic resnet video generator.
242
+ Args:
243
+ block (nn.Module): resnet building block
244
+ conv_makers (list(functions)): generator function for each layer
245
+ layers (List[int]): number of blocks per layer
246
+ stem (nn.Module, optional): Resnet stem, if None, defaults to conv-bn-relu. Defaults to None.
247
+ num_classes (int, optional): Dimension of the final FC layer. Defaults to 400.
248
+ zero_init_residual (bool, optional): Zero init bottleneck residual BN. Defaults to False.
249
+ """
250
+ super(VideoResNet, self).__init__()
251
+ self.inplanes = 64
252
+
253
+ self.stem = stem()
254
+
255
+ self.layer1 = self._make_layer(
256
+ block, conv_makers[0], 64, layers[0], stride=1)
257
+ self.layer2 = self._make_layer(
258
+ block, conv_makers[1], 128, layers[1], stride=2)
259
+ self.layer3 = self._make_layer(
260
+ block, conv_makers[2], 256, layers[2], stride=2)
261
+ self.layer4 = self._make_layer(
262
+ block, conv_makers[3], 512, layers[3], stride=2)
263
+
264
+ self.avgpool = nn.AdaptiveAvgPool3d((1, 1, 1))
265
+ self.fc = nn.Linear(512 * block.expansion, num_classes)
266
+
267
+ # init weights
268
+ self._initialize_weights()
269
+
270
+ if zero_init_residual:
271
+ for m in self.modules():
272
+ if isinstance(m, Bottleneck):
273
+ nn.init.constant_(m.bn3.weight, 0)
274
+
275
+ def forward(self, x):
276
+ x = self.stem(x)
277
+
278
+ x = self.layer1(x)
279
+ x = self.layer2(x)
280
+ x = self.layer3(x)
281
+ x = self.layer4(x)
282
+
283
+ x = self.avgpool(x)
284
+ # Flatten the layer to fc
285
+ # x = x.flatten(1)
286
+ # x = self.fc(x)
287
+ N = x.shape[0]
288
+ x = x.squeeze()
289
+ if N == 1:
290
+ x = x[None]
291
+
292
+ return x
293
+
294
+ def _make_layer(self, block, conv_builder, planes, blocks, stride=1):
295
+ downsample = None
296
+
297
+ if stride != 1 or self.inplanes != planes * block.expansion:
298
+ ds_stride = conv_builder.get_downsample_stride(stride)
299
+ downsample = nn.Sequential(
300
+ nn.Conv3d(self.inplanes, planes * block.expansion,
301
+ kernel_size=1, stride=ds_stride, bias=False),
302
+ nn.BatchNorm3d(planes * block.expansion)
303
+ )
304
+ layers = []
305
+ layers.append(block(self.inplanes, planes,
306
+ conv_builder, stride, downsample))
307
+
308
+ self.inplanes = planes * block.expansion
309
+ for i in range(1, blocks):
310
+ layers.append(block(self.inplanes, planes, conv_builder))
311
+
312
+ return nn.Sequential(*layers)
313
+
314
+ def _initialize_weights(self):
315
+ for m in self.modules():
316
+ if isinstance(m, nn.Conv3d):
317
+ nn.init.kaiming_normal_(m.weight, mode='fan_out',
318
+ nonlinearity='relu')
319
+ if m.bias is not None:
320
+ nn.init.constant_(m.bias, 0)
321
+ elif isinstance(m, nn.BatchNorm3d):
322
+ nn.init.constant_(m.weight, 1)
323
+ nn.init.constant_(m.bias, 0)
324
+ elif isinstance(m, nn.Linear):
325
+ nn.init.normal_(m.weight, 0, 0.01)
326
+ nn.init.constant_(m.bias, 0)
327
+
328
+
329
+ def _video_resnet(arch, pretrained=False, progress=True, **kwargs):
330
+ model = VideoResNet(**kwargs)
331
+
332
+ if pretrained:
333
+ state_dict = load_state_dict_from_url(model_urls[arch],
334
+ progress=progress)
335
+ model.load_state_dict(state_dict)
336
+ return model
337
+
338
+
339
+ def r3d_18(pretrained=False, progress=True, **kwargs):
340
+ """Construct 18 layer Resnet3D model as in
341
+ https://arxiv.org/abs/1711.11248
342
+ Args:
343
+ pretrained (bool): If True, returns a model pre-trained on Kinetics-400
344
+ progress (bool): If True, displays a progress bar of the download to stderr
345
+ Returns:
346
+ nn.Module: R3D-18 network
347
+ """
348
+
349
+ return _video_resnet('r3d_18',
350
+ pretrained, progress,
351
+ block=BasicBlock,
352
+ conv_makers=[Conv3DSimple] * 4,
353
+ layers=[2, 2, 2, 2],
354
+ stem=BasicStem, **kwargs)
355
+
356
+
357
+ def mc3_18(pretrained=False, progress=True, **kwargs):
358
+ """Constructor for 18 layer Mixed Convolution network as in
359
+ https://arxiv.org/abs/1711.11248
360
+ Args:
361
+ pretrained (bool): If True, returns a model pre-trained on Kinetics-400
362
+ progress (bool): If True, displays a progress bar of the download to stderr
363
+ Returns:
364
+ nn.Module: MC3 Network definition
365
+ """
366
+ return _video_resnet('mc3_18',
367
+ pretrained, progress,
368
+ block=BasicBlock,
369
+ conv_makers=[Conv3DSimple] + [Conv3DNoTemporal] * 3,
370
+ layers=[2, 2, 2, 2],
371
+ stem=BasicStem, **kwargs)
372
+
373
+
374
+ def r2plus1d_18(pretrained=False, progress=True, **kwargs):
375
+ """Constructor for the 18 layer deep R(2+1)D network as in
376
+ https://arxiv.org/abs/1711.11248
377
+ Args:
378
+ pretrained (bool): If True, returns a model pre-trained on Kinetics-400
379
+ progress (bool): If True, displays a progress bar of the download to stderr
380
+ Returns:
381
+ nn.Module: R(2+1)D-18 network
382
+ """
383
+ return _video_resnet('r2plus1d_18',
384
+ pretrained, progress,
385
+ block=BasicBlock,
386
+ conv_makers=[Conv2Plus1D] * 4,
387
+ layers=[2, 2, 2, 2],
388
+ stem=R2Plus1dStem, **kwargs)
389
+
390
+
391
+ #################################################################################
392
+ # Onset Net #
393
+ #################################################################################
394
+
395
+ class R2plus1d18KeepTemp(nn.Module):
396
+
397
+ def __init__(self, pretrained=True):
398
+ super().__init__()
399
+
400
+ self.model = r2plus1d_18(pretrained=pretrained)
401
+
402
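+ # Remove temporal downsampling: the temporal convolutions keep stride 1 and the downsample paths use spatial-only stride (1, 2, 2), so temporal resolution is preserved.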
+ self.model.layer2[0].conv1[0][3] = nn.Conv3d(230, 128, kernel_size=(3, 1, 1),
403
+ stride=(1, 1, 1), padding=(1, 0, 0), bias=False)
404
+ self.model.layer2[0].downsample = nn.Sequential(
405
+ nn.Conv3d(64, 128, kernel_size=(1, 1, 1), stride=(1, 2, 2), bias=False),
406
+ nn.BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
407
+ )
408
+ self.model.layer3[0].conv1[0][3] = nn.Conv3d(460, 256, kernel_size=(3, 1, 1),
409
+ stride=(1, 1, 1), padding=(1, 0, 0), bias=False)
410
+ self.model.layer3[0].downsample = nn.Sequential(
411
+ nn.Conv3d(128, 256, kernel_size=(1, 1, 1), stride=(1, 2, 2), bias=False),
412
+ nn.BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
413
+ )
414
+ self.model.layer4[0].conv1[0][3] = nn.Conv3d(921, 512, kernel_size=(3, 1, 1),
415
+ stride=(1, 1, 1), padding=(1, 0, 0), bias=False)
416
+ self.model.layer4[0].downsample = nn.Sequential(
417
+ nn.Conv3d(256, 512, kernel_size=(1, 1, 1), stride=(1, 2, 2), bias=False),
418
+ nn.BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
419
+ )
420
+ self.model.avgpool = nn.AdaptiveAvgPool3d((None, 1, 1))
421
+ self.model.fc = nn.Identity()
422
+
423
+ def forward(self, x):
424
+ x = self.model(x)
425
+ return x
426
+
427
+
428
+ class VideoOnsetNet(nn.Module):
429
+ # Video Onset detection network
430
+ def __init__(self, pretrained=False):
431
+ super(VideoOnsetNet, self).__init__()
432
+ self.net = R2plus1d18KeepTemp(pretrained=pretrained)
433
+ self.fc = nn.Sequential(
434
+ nn.Linear(512, 128),
435
+ nn.ReLU(True),
436
+ nn.Linear(128, 1)
437
+ )
438
+
439
+ def forward(self, x):
440
+ x = self.net(x)
441
+ x = x.transpose(-1, -2)
442
+ x = self.fc(x)
443
+ x = x.squeeze(-1)
444
+
445
+ return x
446
+
preprocess/extract_cavp.py ADDED
@@ -0,0 +1,42 @@
1
+ import os
2
+ import torch
3
+ import numpy as np
4
+ from tqdm import tqdm
5
+ from argparse import ArgumentParser
6
+
7
+ import sys
8
+ sys.path.append(os.getcwd())
9
+ from cavp_util import Extract_CAVP_Features
10
+
11
+ def main():
12
+ parser = ArgumentParser(description="Inference script parameters")
13
+ parser.add_argument("--video_folder_path", type=str, default="./input_videos", required=True, help="Path to the input video folder")
14
+ parser.add_argument("--save_folder_path", type=str, default="./output", help="Folder to save output files")
15
+ parser.add_argument("--cavp_config_path", type=str, default="./cavp.yaml", help="Path to CAVP config file")
16
+ parser.add_argument("--cavp_ckpt_path", type=str, default="./cavp_epoch66.ckpt", help="Path to CAVP checkpoint file")
17
+
18
+ args = parser.parse_args()
19
+
20
+ device = "cuda" if torch.cuda.is_available() else "cpu"
21
+ extract_cavp = Extract_CAVP_Features(device=device, config_path=args.cavp_config_path, ckpt_path=args.cavp_ckpt_path)
22
+
23
+ os.makedirs(os.path.join(args.save_folder_path, "cavp_feats"), exist_ok=True)
24
+
25
+ data_list = [file for file in os.listdir(args.video_folder_path) if file.endswith(".mp4")]
26
+ data_list = sorted(data_list)
27
+
28
+ for _, video_file in enumerate(tqdm(data_list, desc="Extracting CAVP features", total=len(data_list))):
29
+ video_path = os.path.join(args.video_folder_path, video_file)
30
+ try:
31
+ cavp_feats = extract_cavp(video_path, tmp_path=args.save_folder_path)
32
+ # Save cavp_feats as npz file
33
+ base_name = os.path.splitext(os.path.basename(video_file))[0]
34
+ np.savez(os.path.join(args.save_folder_path, "cavp_feats", f"{base_name}.npz"), cavp_feats)
35
+ except Exception as e:
36
+ print(f"Error processing {video_file}: {e}")
37
+
38
+ print("========================================FINISH CAVP EXTRACTION===========================================")
39
+
40
+
41
+ if __name__ == "__main__":
42
+ main()
preprocess/extract_fbank.py ADDED
@@ -0,0 +1,58 @@
1
+ import os
2
+ import torch
3
+ import torchaudio
4
+ import numpy as np
5
+ from tqdm import tqdm
6
+ import soundfile as sf
7
+ from argparse import ArgumentParser
8
+
9
+ def main():
10
+ parser = ArgumentParser(description="Inference script parameters")
11
+ parser.add_argument("--wav_folder_path", type=str, default="./input_wavs", required=True, help="Path to the input video folder")
12
+ parser.add_argument("--save_folder_path", type=str, default="./output", help="Folder to save output files")
13
+
14
+ args = parser.parse_args()
15
+
16
+ os.makedirs(os.path.join(args.save_folder_path, "fbank"), exist_ok=True)
17
+
18
+ target_length = 1024
19
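+ # Dataset-level fbank mean/std used below for normalization; clips are padded/trimmed to target_length frames.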
+ norm_mean = -4.268
20
+ norm_std = 4.569
21
+
22
+ # Loop over all .wav files in the audio folder
23
+ for filename in tqdm(os.listdir(args.wav_folder_path)):
24
+ if filename.endswith('.wav'):
25
+ # Load the audio file
26
+ source_file = os.path.join(args.wav_folder_path, filename)
27
+ wav, sr = sf.read(source_file)
28
+ if len(wav.shape) > 1:
29
+ wav = wav[:, 0]
30
+
31
+ source = torch.from_numpy(wav).float()
32
+ if sr != 16000:
33
+ source = torchaudio.functional.resample(source, orig_freq=sr, new_freq=16000).float()
34
+
35
+ source = source - source.mean()
36
+ source = source.unsqueeze(dim=0)
37
+ source = torchaudio.compliance.kaldi.fbank(source, htk_compat=True, sample_frequency=16000, use_energy=False,
38
+ window_type='hanning', num_mel_bins=128, dither=0.0, frame_shift=10).unsqueeze(dim=0)
39
+
40
+ n_frames = source.shape[1]
41
+ diff = target_length - n_frames
42
+ if diff > 0:
43
+ m = torch.nn.ZeroPad2d((0, 0, 0, diff))
44
+ source = m(source)
45
+ elif diff < 0:
46
+ source = source[:,0:target_length, :]
47
+ source = (source - norm_mean) / (norm_std * 2)
48
+
49
+ # Save the spectrogram as .npy file
50
+ output_filename = os.path.splitext(filename)[0] + '.npy'
51
+ output_path = os.path.join(args.save_folder_path, "fbank", output_filename)
52
+
53
+ np.save(output_path, source.squeeze(0).numpy())
54
+
55
+ print("========================================FINISH FBANK EXTRACTION===========================================")
56
+
57
+ if __name__ == "__main__":
58
+ main()
preprocess/extract_mel.py ADDED
@@ -0,0 +1,333 @@
1
+ import os
2
+ import torchaudio
3
+ import numpy as np
4
+ from tqdm import tqdm
5
+ import torch
6
+ import torch.nn.functional as F
7
+ from scipy.signal import get_window
8
+ from librosa.util import pad_center, tiny, normalize
9
+ from librosa.filters import mel as librosa_mel_fn
10
+ from argparse import ArgumentParser
11
+
12
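+ # Sum of squared, normalized window envelopes across frames; used to undo windowing (overlap-add normalization) in the inverse STFT.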
+ def window_sumsquare(
13
+ window,
14
+ n_frames,
15
+ hop_length,
16
+ win_length,
17
+ n_fft,
18
+ dtype=np.float32,
19
+ norm=None,
20
+ ):
21
+ if win_length is None:
22
+ win_length = n_fft
23
+
24
+ n = n_fft + hop_length * (n_frames - 1)
25
+ x = np.zeros(n, dtype=dtype)
26
+
27
+ # Compute the squared window at the desired length
28
+ win_sq = get_window(window, win_length, fftbins=True)
29
+ win_sq = normalize(win_sq, norm=norm) ** 2
30
+ win_sq = pad_center(win_sq, size=n_fft)
31
+
32
+ # Fill the envelope
33
+ for i in range(n_frames):
34
+ sample = i * hop_length
35
+ x[sample : min(n, sample + n_fft)] += win_sq[: max(0, min(n_fft, n - sample))]
36
+ return x
37
+
38
+ def dynamic_range_compression(x, normalize_fun=torch.log, C=1, clip_val=1e-5):
39
+ """
40
+ PARAMS
41
+ ------
42
+ C: compression factor
43
+ """
44
+ return normalize_fun(torch.clamp(x, min=clip_val) * C)
45
+
46
+
47
+ def dynamic_range_decompression(x, C=1):
48
+ """
49
+ PARAMS
50
+ ------
51
+ C: compression factor used to compress
52
+ """
53
+ return torch.exp(x) / C
54
+
55
+
56
+ class STFT(torch.nn.Module):
57
+ """adapted from Prem Seetharaman's https://github.com/pseeth/pytorch-stft"""
58
+
59
+ def __init__(self, filter_length, hop_length, win_length, window="hann"):
60
+ super(STFT, self).__init__()
61
+ self.filter_length = filter_length
62
+ self.hop_length = hop_length
63
+ self.win_length = win_length
64
+ self.window = window
65
+ self.forward_transform = None
66
+ scale = self.filter_length / self.hop_length
67
+ fourier_basis = np.fft.fft(np.eye(self.filter_length))
68
+
69
+ cutoff = int((self.filter_length / 2 + 1))
70
+ fourier_basis = np.vstack(
71
+ [np.real(fourier_basis[:cutoff, :]), np.imag(fourier_basis[:cutoff, :])]
72
+ )
73
+
74
+ forward_basis = torch.FloatTensor(fourier_basis[:, None, :])
75
+ inverse_basis = torch.FloatTensor(
76
+ np.linalg.pinv(scale * fourier_basis).T[:, None, :]
77
+ )
78
+
79
+ if window is not None:
80
+ assert filter_length >= win_length
81
+ # get window and zero center pad it to filter_length
82
+ fft_window = get_window(window, win_length, fftbins=True)
83
+ fft_window = pad_center(fft_window, size=filter_length)
84
+ fft_window = torch.from_numpy(fft_window).float()
85
+
86
+ # window the bases
87
+ forward_basis *= fft_window
88
+ inverse_basis *= fft_window
89
+
90
+ self.register_buffer("forward_basis", forward_basis.float())
91
+ self.register_buffer("inverse_basis", inverse_basis.float())
92
+
93
+ def transform(self, input_data):
94
+ num_batches = input_data.size(0)
95
+ num_samples = input_data.size(1)
96
+
97
+ self.num_samples = num_samples
98
+
99
+ # similar to librosa, reflect-pad the input
100
+ input_data = input_data.view(num_batches, 1, num_samples)
101
+ input_data = F.pad(
102
+ input_data.unsqueeze(1),
103
+ (int(self.filter_length / 2), int(self.filter_length / 2), 0, 0),
104
+ mode="reflect",
105
+ )
106
+ input_data = input_data.squeeze(1)
107
+
108
+ forward_transform = F.conv1d(
109
+ input_data,
110
+ torch.autograd.Variable(self.forward_basis, requires_grad=False),
111
+ stride=self.hop_length,
112
+ padding=0,
113
+ ).cpu()
114
+
115
+ cutoff = int((self.filter_length / 2) + 1)
116
+ real_part = forward_transform[:, :cutoff, :]
117
+ imag_part = forward_transform[:, cutoff:, :]
118
+
119
+ magnitude = torch.sqrt(real_part**2 + imag_part**2)
120
+ phase = torch.autograd.Variable(torch.atan2(imag_part.data, real_part.data))
121
+
122
+ return magnitude, phase
123
+
124
+ def inverse(self, magnitude, phase):
125
+ recombine_magnitude_phase = torch.cat(
126
+ [magnitude * torch.cos(phase), magnitude * torch.sin(phase)], dim=1
127
+ )
128
+
129
+ inverse_transform = F.conv_transpose1d(
130
+ recombine_magnitude_phase,
131
+ torch.autograd.Variable(self.inverse_basis, requires_grad=False),
132
+ stride=self.hop_length,
133
+ padding=0,
134
+ )
135
+
136
+ if self.window is not None:
137
+ window_sum = window_sumsquare(
138
+ self.window,
139
+ magnitude.size(-1),
140
+ hop_length=self.hop_length,
141
+ win_length=self.win_length,
142
+ n_fft=self.filter_length,
143
+ dtype=np.float32,
144
+ )
145
+ # remove modulation effects
146
+ approx_nonzero_indices = torch.from_numpy(
147
+ np.where(window_sum > tiny(window_sum))[0]
148
+ )
149
+ window_sum = torch.autograd.Variable(
150
+ torch.from_numpy(window_sum), requires_grad=False
151
+ )
152
+ window_sum = window_sum
153
+ inverse_transform[:, :, approx_nonzero_indices] /= window_sum[
154
+ approx_nonzero_indices
155
+ ]
156
+
157
+ # scale by hop ratio
158
+ inverse_transform *= float(self.filter_length) / self.hop_length
159
+
160
+ inverse_transform = inverse_transform[:, :, int(self.filter_length / 2) :]
161
+ inverse_transform = inverse_transform[:, :, : -int(self.filter_length / 2) :]
162
+
163
+ return inverse_transform
164
+
165
+ def forward(self, input_data):
166
+ self.magnitude, self.phase = self.transform(input_data)
167
+ reconstruction = self.inverse(self.magnitude, self.phase)
168
+ return reconstruction
169
+
170
+
171
+ class TacotronSTFT(torch.nn.Module):
172
+ def __init__(
173
+ self,
174
+ filter_length,
175
+ hop_length,
176
+ win_length,
177
+ n_mel_channels,
178
+ sampling_rate,
179
+ mel_fmin,
180
+ mel_fmax,
181
+ ):
182
+ super(TacotronSTFT, self).__init__()
183
+ self.n_mel_channels = n_mel_channels
184
+ self.sampling_rate = sampling_rate
185
+ self.stft_fn = STFT(filter_length, hop_length, win_length)
186
+ mel_basis = librosa_mel_fn(
187
+ sr=sampling_rate, n_fft=filter_length, n_mels=n_mel_channels, fmin=mel_fmin, fmax=mel_fmax
188
+ )
189
+ mel_basis = torch.from_numpy(mel_basis).float()
190
+ self.register_buffer("mel_basis", mel_basis)
191
+
192
+ def spectral_normalize(self, magnitudes, normalize_fun):
193
+ output = dynamic_range_compression(magnitudes, normalize_fun)
194
+ return output
195
+
196
+ def spectral_de_normalize(self, magnitudes):
197
+ output = dynamic_range_decompression(magnitudes)
198
+ return output
199
+
200
+ def mel_spectrogram(self, y, normalize_fun=torch.log):
201
+ assert torch.min(y.data) >= -1, torch.min(y.data)
202
+ assert torch.max(y.data) <= 1, torch.max(y.data)
203
+
204
+ magnitudes, phases = self.stft_fn.transform(y)
205
+ magnitudes = magnitudes.data
206
+ mel_output = torch.matmul(self.mel_basis, magnitudes)
207
+ mel_output = self.spectral_normalize(mel_output, normalize_fun)
208
+ energy = torch.norm(magnitudes, dim=1)
209
+
210
+ log_magnitudes = self.spectral_normalize(magnitudes, normalize_fun)
211
+
212
+ return mel_output, log_magnitudes, energy
213
+
214
+ def get_mel_from_wav(audio, _stft):
215
+ audio = torch.clip(torch.FloatTensor(audio).unsqueeze(0), -1, 1)
216
+ audio = torch.autograd.Variable(audio, requires_grad=False)
217
+ melspec, log_magnitudes_stft, energy = _stft.mel_spectrogram(audio)
218
+ melspec = torch.squeeze(melspec, 0).numpy().astype(np.float32)
219
+ log_magnitudes_stft = (
220
+ torch.squeeze(log_magnitudes_stft, 0).numpy().astype(np.float32)
221
+ )
222
+ energy = torch.squeeze(energy, 0).numpy().astype(np.float32)
223
+ return melspec, log_magnitudes_stft, energy
224
+
225
+ def _pad_spec(fbank, target_length=1024):
226
+ n_frames = fbank.shape[0]
227
+ p = target_length - n_frames
228
+ # cut and pad
229
+ if p > 0:
230
+ m = torch.nn.ZeroPad2d((0, 0, 0, p))
231
+ fbank = m(fbank)
232
+ elif p < 0:
233
+ fbank = fbank[0:target_length, :]
234
+
235
+ if fbank.size(-1) % 2 != 0:
236
+ fbank = fbank[..., :-1]
237
+
238
+ return fbank
239
+
240
+ def pad_wav(waveform, segment_length):
241
+ waveform_length = waveform.shape[-1]
242
+ assert waveform_length > 100, "Waveform is too short, %s" % waveform_length
243
+ if segment_length is None or waveform_length == segment_length:
244
+ return waveform
245
+ elif waveform_length > segment_length:
246
+ return waveform[:segment_length]
247
+ elif waveform_length < segment_length:
248
+ temp_wav = np.zeros((1, segment_length))
249
+ temp_wav[:, :waveform_length] = waveform
250
+ return temp_wav
251
+
252
+ def normalize_wav(waveform):
253
+ waveform = waveform - np.mean(waveform)
254
+ waveform = waveform / (np.max(np.abs(waveform)) + 1e-8)
255
+ return waveform * 0.5
256
+
257
+ def read_wav_file(filename, segment_length):
258
+ waveform, sr = torchaudio.load(filename)
259
+ waveform = torchaudio.functional.resample(waveform, orig_freq=sr, new_freq=16000)
260
+ waveform = waveform.numpy()[0, ...]
261
+ waveform = normalize_wav(waveform)
262
+ waveform = waveform[None, ...]
263
+ waveform = pad_wav(waveform, segment_length)
264
+
265
+ waveform = waveform / np.max(np.abs(waveform))
266
+ waveform = 0.5 * waveform
267
+
268
+ return waveform
269
+
270
+ def wav_to_fbank(filename, target_length=1024, fn_STFT=None):
271
+ assert fn_STFT is not None
272
+
273
+ # mixup
274
+ waveform = read_wav_file(filename, target_length * 160) # hop size is 160
275
+
276
+ waveform = waveform[0, ...]
277
+ waveform = torch.FloatTensor(waveform)
278
+
279
+ fbank, log_magnitudes_stft, energy = get_mel_from_wav(waveform, fn_STFT)
280
+
281
+ fbank = torch.FloatTensor(fbank.T)
282
+ log_magnitudes_stft = torch.FloatTensor(log_magnitudes_stft.T)
283
+
284
+ fbank, log_magnitudes_stft = _pad_spec(fbank, target_length), _pad_spec(
285
+ log_magnitudes_stft, target_length
286
+ )
287
+
288
+ return fbank, log_magnitudes_stft, waveform
289
+
290
+ def main():
291
+ parser = ArgumentParser(description="Inference script parameters")
292
+ parser.add_argument("--wav_folder_path", type=str, default="./input_wavs", required=True, help="Path to the input video folder")
293
+ parser.add_argument("--save_folder_path", type=str, default="./output", help="Folder to save output files")
294
+
295
+ args = parser.parse_args()
296
+
297
+ os.makedirs(os.path.join(args.save_folder_path, "melspec"), exist_ok=True)
298
+
299
+ # Parameters
300
+ filter_length = 1024
301
+ hop_length = 160
302
+ win_length = 1024
303
+ n_mel_channels = 64
304
+ sampling_rate = 16000
305
+ mel_fmin = 0
306
+ mel_fmax = 8000
307
+ duration = 10
308
+
309
+ fn_STFT = TacotronSTFT(
310
+ filter_length,
311
+ hop_length,
312
+ win_length,
313
+ n_mel_channels,
314
+ sampling_rate,
315
+ mel_fmin,
316
+ mel_fmax,
317
+ )
318
+
319
+ for filename in tqdm(os.listdir(args.wav_folder_path)):
320
+ if filename.endswith('.wav'):
321
+ original_audio_file_path = os.path.join(args.wav_folder_path, filename)
322
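+ # int(duration * 102.4) = 1024 target mel frames for the 10-second clip.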
+ mel, _, _ = wav_to_fbank(
323
+ original_audio_file_path, target_length=int(duration * 102.4), fn_STFT=fn_STFT
324
+ )
325
+ output_filename = os.path.splitext(filename)[0] + '.npy'
326
+ output_path = os.path.join(args.save_folder_path, "melspec", output_filename)
327
+ np.save(output_path, mel.numpy())
328
+
329
+ print("========================================FINISH MELSPEC EXTRACTION===========================================")
330
+
331
+
332
+ if __name__ == "__main__":
333
+ main()
preprocess/extract_onset.py ADDED
@@ -0,0 +1,52 @@
1
+ import os
2
+ import torch
3
+ import numpy as np
4
+ from tqdm import tqdm
5
+ from argparse import ArgumentParser
6
+
7
+ import sys
8
+ sys.path.append(os.getcwd())
9
+ from onset_util import VideoOnsetNet, extract_onset
10
+
11
+ def main():
12
+ parser = ArgumentParser(description="Inference script parameters")
13
+ parser.add_argument("--video_folder_path", type=str, default="./input_videos", required=True, help="Path to the input video folder")
14
+ parser.add_argument("--save_folder_path", type=str, default="./output", help="Folder to save output files")
15
+ parser.add_argument("--onset_ckpt_path", type=str, default="./onset_ckpt.ckpt", help="Path to onset checkpoint")
16
+
17
+ args = parser.parse_args()
18
+
19
+ device = "cuda" if torch.cuda.is_available() else "cpu"
20
+ # Load the pre-trained onset detection model
21
+ state_dict = torch.load(args.onset_ckpt_path)["state_dict"]
22
+ new_state_dict = {}
23
+ for key, value in state_dict.items():
24
+ if "model.net.model" in key:
25
+ new_key = key.replace("model.net.model", "net.model") # Adjust the key as needed
26
+ elif "model.fc." in key:
27
+ new_key = key.replace("model.fc", "fc") # Adjust the key as needed
28
+ new_state_dict[new_key] = value
29
+ onset_model = VideoOnsetNet(False).to(device)
30
+ onset_model.load_state_dict(new_state_dict)
31
+ onset_model.eval()
32
+
33
+ os.makedirs(os.path.join(args.save_folder_path, "onset_feats"), exist_ok=True)
34
+
35
+ data_list = [file for file in os.listdir(args.video_folder_path) if file.endswith(".mp4")]
36
+ data_list = sorted(data_list)
37
+
38
+ for _, video_file in enumerate(tqdm(data_list, desc="Extracting Onset features", total=len(data_list))):
39
+ video_path = os.path.join(args.video_folder_path, video_file)
40
+ try:
41
+ onset_feats = extract_onset(video_path, onset_model, tmp_path=args.save_folder_path, device=device)
42
+ # Save onset_feats as npz file
43
+ base_name = os.path.splitext(os.path.basename(video_file))[0]
44
+ np.savez(os.path.join(args.save_folder_path, "onset_feats", f"{base_name}.npz"), onset_feats)
45
+ except Exception as e:
46
+ print(f"Error processing {video_file}: {e}")
47
+
48
+ print("========================================FINISH CAVP EXTRACTION===========================================")
49
+
50
+
51
+ if __name__ == "__main__":
52
+ main()
preprocess_audio.sh ADDED
@@ -0,0 +1,23 @@
1
+ #!/bin/bash
2
+
3
+ # Set common paths
4
+ WAV_FOLDER_PATH="./VGGSound/audios"
5
+ SAVE_FOLDER_PATH="./output"
6
+
7
+ echo "Starting audio preprocessing..."
8
+ echo "WAV folder: $WAV_FOLDER_PATH"
9
+ echo "Save folder: $SAVE_FOLDER_PATH"
10
+
11
+ # Extract mel spectrograms
12
+ echo "Extracting mel spectrograms..."
13
+ CUDA_VISIBLE_DEVICES=7 python preprocess/extract_mel.py \
14
+ --wav_folder_path $WAV_FOLDER_PATH \
15
+ --save_folder_path $SAVE_FOLDER_PATH
16
+
17
+ # Extract fbank features
18
+ echo "Extracting fbank features..."
19
+ CUDA_VISIBLE_DEVICES=7 python preprocess/extract_fbank.py \
20
+ --wav_folder_path $WAV_FOLDER_PATH \
21
+ --save_folder_path $SAVE_FOLDER_PATH
22
+
23
+ echo "Audio preprocessing completed!"
preprocess_video.sh ADDED
@@ -0,0 +1,26 @@
1
+ #!/bin/bash
2
+
3
+ # Set common paths
4
+ VIDEO_FOLDER_PATH="./VGGSound/videos"
5
+ SAVE_FOLDER_PATH="./output"
6
+
7
+ echo "Starting video preprocessing..."
8
+ echo "Video folder: $VIDEO_FOLDER_PATH"
9
+ echo "Save folder: $SAVE_FOLDER_PATH"
10
+
11
+ # Extract CAVP features
12
+ echo "Extracting CAVP features..."
13
+ CUDA_VISIBLE_DEVICES=0 python preprocess/extract_cavp.py \
14
+ --video_folder_path $VIDEO_FOLDER_PATH \
15
+ --save_folder_path $SAVE_FOLDER_PATH \
16
+ --cavp_config_path ./cavp/cavp.yaml \
17
+ --cavp_ckpt_path ./ckpts/cavp_epoch66.ckpt
18
+
19
+ # Extract onset features
20
+ echo "Extracting onset features..."
21
+ CUDA_VISIBLE_DEVICES=0 python preprocess/extract_onset.py \
22
+ --video_folder_path $VIDEO_FOLDER_PATH \
23
+ --save_folder_path $SAVE_FOLDER_PATH \
24
+ --onset_ckpt_path ./ckpts/onset_model.ckpt
25
+
26
+ echo "Video preprocessing completed!"
requirements.txt ADDED
@@ -0,0 +1,14 @@
1
+ accelerate==1.2.0
2
+ av==10.0.0
3
+ diffusers==0.31.0
4
+ einops==0.8.0
5
+ ffmpeg-python==0.2.0
6
+ h5py==3.10.0
7
+ librosa==0.10.2.post1
8
+ mmcv==1.7.0
9
+ numpy==1.23.5
10
+ opencv-python==4.5.5.64
11
+ soundfile==0.12.1
12
+ timm==1.0.12
13
+ transformers==4.47.0
14
+ wandb==0.19.0
samplers.py ADDED
@@ -0,0 +1,198 @@
1
+ import torch
2
+ import numpy as np
3
+
4
+
5
+ def expand_t_like_x(t, x_cur):
6
+ """Function to reshape time t to broadcastable dimension of x
7
+ Args:
8
+ t: [batch_dim,], time vector
9
+ x: [batch_dim,...], data point
10
+ """
11
+ dims = [1] * (len(x_cur.size()) - 1)
12
+ t = t.view(t.size(0), *dims)
13
+ return t
14
+
15
+ def get_score_from_velocity(vt, xt, t, path_type="linear"):
16
+ """Wrapper function: transfrom velocity prediction model to score
17
+ Args:
18
+ velocity: [batch_dim, ...] shaped tensor; velocity model output
19
+ x: [batch_dim, ...] shaped tensor; x_t data point
20
+ t: [batch_dim,] time tensor
21
+ """
22
+ t = expand_t_like_x(t, xt)
23
+ if path_type == "linear":
24
+ alpha_t, d_alpha_t = 1 - t, torch.ones_like(xt, device=xt.device) * -1
25
+ sigma_t, d_sigma_t = t, torch.ones_like(xt, device=xt.device)
26
+ elif path_type == "cosine":
27
+ alpha_t = torch.cos(t * np.pi / 2)
28
+ sigma_t = torch.sin(t * np.pi / 2)
29
+ d_alpha_t = -np.pi / 2 * torch.sin(t * np.pi / 2)
30
+ d_sigma_t = np.pi / 2 * torch.cos(t * np.pi / 2)
31
+ else:
32
+ raise NotImplementedError
33
+
34
+ mean = xt
35
+ reverse_alpha_ratio = alpha_t / d_alpha_t
36
+ var = sigma_t**2 - reverse_alpha_ratio * d_sigma_t * sigma_t
37
+ score = (reverse_alpha_ratio * vt - mean) / var
38
+
39
+ return score
40
+
41
+
42
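+ # Time-dependent diffusion coefficient w(t) = 2t used by the Euler-Maruyama (SDE) sampler below.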
+ def compute_diffusion(t_cur):
43
+ return 2 * t_cur
44
+
45
+
46
+ def euler_sampler(
47
+ model,
48
+ latents,
49
+ y,
50
+ context,
51
+ num_steps=20,
52
+ heun=False,
53
+ cfg_scale=1.0,
54
+ guidance_low=0.0,
55
+ guidance_high=1.0,
56
+ path_type="linear", # not used, just for compatability
57
+ ):
58
+ # setup conditioning
59
+ if cfg_scale > 1.0:
60
+ y_null = torch.zeros_like(y).to(y.device)
61
+ context_null = torch.zeros_like(context).to(context.device)
62
+ _dtype = latents.dtype
63
+ t_steps = torch.linspace(1, 0, num_steps+1, dtype=torch.bfloat16)
64
+ x_next = latents.to(torch.bfloat16)
65
+ device = x_next.device
66
+
67
+ with torch.no_grad():
68
+ for i, (t_cur, t_next) in enumerate(zip(t_steps[:-1], t_steps[1:])):
69
+ x_cur = x_next
70
+ if cfg_scale > 1.0 and t_cur <= guidance_high and t_cur >= guidance_low:
71
+ model_input = torch.cat([x_cur] * 2, dim=0)
72
+ y_cur = torch.cat([y, y_null], dim=0)
73
+ context_cur = torch.cat([context, context_null], dim=0)
74
+ else:
75
+ model_input = x_cur
76
+ y_cur = y
77
+ context_cur = context
78
+ do_guidance = (cfg_scale > 1.0 and t_cur <= guidance_high and t_cur >= guidance_low)
79
+ kwargs = dict(y=y_cur, context=context_cur, do_guidance=do_guidance)
80
+ time_input = torch.ones(model_input.size(0)).to(device=device, dtype=torch.bfloat16) * t_cur
81
+ d_cur = model(
82
+ model_input.to(dtype=_dtype), time_input.to(dtype=_dtype), **kwargs
83
+ )[0]
84
+ if cfg_scale > 1. and t_cur <= guidance_high and t_cur >= guidance_low:
85
+ d_cur_cond, d_cur_uncond = d_cur.chunk(2)
86
+ d_cur = d_cur_uncond + cfg_scale * (d_cur_cond - d_cur_uncond)
87
+ x_next = x_cur + (t_next - t_cur) * d_cur
88
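+ # Optional Heun correction: re-evaluate the velocity at t_next and average it with the first estimate for a second-order update.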
+ if heun and (i < num_steps - 1):
89
+ if cfg_scale > 1.0 and t_cur <= guidance_high and t_cur >= guidance_low:
90
+ model_input = torch.cat([x_next] * 2)
91
+ y_cur = torch.cat([y, y_null], dim=0)
92
+ context_cur = torch.cat([context, context_null], dim=0)
93
+ else:
94
+ model_input = x_next
95
+ y_cur = y
96
+ context_cur = context
97
+ do_guidance = (cfg_scale > 1.0 and t_cur <= guidance_high and t_cur >= guidance_low)
98
+ kwargs = dict(y=y_cur, context=context_cur, do_guidance=do_guidance)
99
+ time_input = torch.ones(model_input.size(0)).to(
100
+ device=model_input.device, dtype=torch.bfloat16
101
+ ) * t_next
102
+ d_prime = model(
103
+ model_input.to(dtype=_dtype), time_input.to(dtype=_dtype), **kwargs
104
+ )[0]
105
+ if cfg_scale > 1.0 and t_cur <= guidance_high and t_cur >= guidance_low:
106
+ d_prime_cond, d_prime_uncond = d_prime.chunk(2)
107
+ d_prime = d_prime_uncond + cfg_scale * (d_prime_cond - d_prime_uncond)
108
+ x_next = x_cur + (t_next - t_cur) * (0.5 * d_cur + 0.5 * d_prime)
109
+
110
+ return x_next
111
+
112
+
113
+ def euler_maruyama_sampler(
114
+ model,
115
+ latents,
116
+ y,
117
+ context,
118
+ num_steps=20,
119
+ heun=False, # not used, just for compatibility
120
+ cfg_scale=1.0,
121
+ guidance_low=0.0,
122
+ guidance_high=1.0,
123
+ path_type="linear",
124
+ ):
125
+ # setup conditioning
126
+ if cfg_scale > 1.0:
127
+ y_null = torch.zeros_like(y).to(y.device)
128
+ context_null = torch.zeros_like(context).to(context.device)
129
+
130
+ _dtype = latents.dtype
131
+ t_steps = torch.linspace(1., 0.04, num_steps, dtype=torch.bfloat16)
132
+ t_steps = torch.cat([t_steps, torch.tensor([0.], dtype=torch.bfloat16)])
133
+ x_next = latents.to(torch.bfloat16)
134
+ device = x_next.device
135
+
136
+ with torch.no_grad():
137
+ for i, (t_cur, t_next) in enumerate(zip(t_steps[:-2], t_steps[1:-1])):
138
+ dt = t_next - t_cur
139
+ x_cur = x_next
140
+ if cfg_scale > 1.0 and t_cur <= guidance_high and t_cur >= guidance_low:
141
+ model_input = torch.cat([x_cur] * 2, dim=0)
142
+ y_cur = torch.cat([y, y_null], dim=0)
143
+ context_cur = torch.cat([context, context_null], dim=0)
144
+ else:
145
+ model_input = x_cur
146
+ y_cur = y
147
+ context_cur = context
148
+ do_guidance = (cfg_scale > 1.0 and t_cur <= guidance_high and t_cur >= guidance_low)
149
+ kwargs = dict(y=y_cur, context=context_cur, do_guidance=do_guidance)
150
+ time_input = torch.ones(model_input.size(0)).to(device=device, dtype=torch.bfloat16) * t_cur
151
+ diffusion = compute_diffusion(t_cur)
152
+ eps_i = torch.randn_like(x_cur).to(device)
153
+ deps = eps_i * torch.sqrt(torch.abs(dt))
154
+
155
+ # compute drift
156
+ v_cur = model(
157
+ model_input.to(dtype=_dtype), time_input.to(dtype=_dtype), **kwargs
158
+ )[0]
159
+ s_cur = get_score_from_velocity(v_cur, model_input, time_input, path_type=path_type)
160
+ d_cur = v_cur - 0.5 * diffusion * s_cur
161
+ if cfg_scale > 1. and t_cur <= guidance_high and t_cur >= guidance_low:
162
+ d_cur_cond, d_cur_uncond = d_cur.chunk(2)
163
+ d_cur = d_cur_uncond + cfg_scale * (d_cur_cond - d_cur_uncond)
164
+
165
+ x_next = x_cur + d_cur * dt + torch.sqrt(diffusion) * deps
166
+
167
+ # last step
168
+ t_cur, t_next = t_steps[-2], t_steps[-1]
169
+ dt = t_next - t_cur
170
+ x_cur = x_next
171
+ if cfg_scale > 1.0 and t_cur <= guidance_high and t_cur >= guidance_low:
172
+ model_input = torch.cat([x_cur] * 2, dim=0)
173
+ y_cur = torch.cat([y, y_null], dim=0)
174
+ context_cur = torch.cat([context, context_null], dim=0)
175
+ else:
176
+ model_input = x_cur
177
+ y_cur = y
178
+ context_cur = context
179
+ do_guidance = (cfg_scale > 1.0 and t_cur <= guidance_high and t_cur >= guidance_low)
180
+ kwargs = dict(y=y_cur, context=context_cur, do_guidance=do_guidance)
181
+ time_input = torch.ones(model_input.size(0)).to(
182
+ device=device, dtype=torch.bfloat16
183
+ ) * t_cur
184
+
185
+ # compute drift
186
+ v_cur = model(
187
+ model_input.to(dtype=_dtype), time_input.to(dtype=_dtype), **kwargs
188
+ )[0]
189
+ s_cur = get_score_from_velocity(v_cur, model_input, time_input, path_type=path_type)
190
+ diffusion = compute_diffusion(t_cur)
191
+ d_cur = v_cur - 0.5 * diffusion * s_cur
192
+ if cfg_scale > 1. and t_cur <= guidance_high and t_cur >= guidance_low:
193
+ d_cur_cond, d_cur_uncond = d_cur.chunk(2)
194
+ d_cur = d_cur_uncond + cfg_scale * (d_cur_cond - d_cur_uncond)
195
+
196
+ mean_x = x_cur + dt * d_cur
197
+
198
+ return mean_x
train.py ADDED
@@ -0,0 +1,403 @@
+ import argparse
+ import copy
+ from copy import deepcopy
+ import logging
+ import os
+ from pathlib import Path
+ from collections import OrderedDict
+ import json
+ import fairseq
+ import fairseq.utils
+ from dataclasses import dataclass
+
+ import torch
+ from tqdm.auto import tqdm
+ from torch.utils.data import DataLoader
+
+ from accelerate import Accelerator
+ from accelerate.logging import get_logger
+ from accelerate.utils import ProjectConfiguration, set_seed
+
+ from models import MMDiT
+ from loss import SILoss
+
+ from dataset import audio_video_spec_fullset_Dataset_Train, collate_fn_taro
+ from diffusers import AudioLDM2Pipeline
+ import wandb
+
+ logger = get_logger(__name__)
+
+ def prob_mask_like(shape, prob, device):
+     if prob == 1:
+         return torch.ones(shape, device = device, dtype = torch.bool)
+     elif prob == 0:
+         return torch.zeros(shape, device = device, dtype = torch.bool)
+     else:
+         return torch.zeros(shape, device = device).float().uniform_(0, 1) < prob
+
+
+ @torch.no_grad()
+ def sample_posterior(moments, latents_scale=1., latents_bias=0.):
+     mean, std = torch.chunk(moments, 2, dim=1)
+     z = mean + std * torch.randn_like(mean)
+     z = (z * latents_scale + latents_bias)
+     return z
+
+
+ @torch.no_grad()
+ def update_ema(ema_model, model, decay=0.9999):
+     """
+     Step the EMA model towards the current model.
+     """
+     ema_params = OrderedDict(ema_model.named_parameters())
+     model_params = OrderedDict(model.named_parameters())
+
+     for name, param in model_params.items():
+         name = name.replace("module.", "")
+         # TODO: Consider applying only to params that require_grad to avoid small numerical changes of pos_embed
+         ema_params[name].mul_(decay).add_(param.data, alpha=1 - decay)
+
+
+ def create_logger(logging_dir):
+     """
+     Create a logger that writes to a log file and stdout.
+     """
+     logging.basicConfig(
+         level=logging.INFO,
+         format='[\033[34m%(asctime)s\033[0m] %(message)s',
+         datefmt='%Y-%m-%d %H:%M:%S',
+         handlers=[logging.StreamHandler(), logging.FileHandler(f"{logging_dir}/log.txt")]
+     )
+     logger = logging.getLogger(__name__)
+     return logger
+
+ @dataclass
+ class UserDirModule:
+     user_dir: str
+
+ def requires_grad(model, flag=True):
+     """
+     Set requires_grad flag for all parameters in a model.
+     """
+     for p in model.parameters():
+         p.requires_grad = flag
+
+
+ #################################################################################
+ #                                  Training Loop                                #
+ #################################################################################
+
+ def main(args):
+     # set accelerator
+     logging_dir = Path(args.output_dir, args.logging_dir)
+     accelerator_project_config = ProjectConfiguration(
+         project_dir=args.output_dir, logging_dir=logging_dir
+     )
+
+     accelerator = Accelerator(
+         gradient_accumulation_steps=args.gradient_accumulation_steps,
+         mixed_precision=args.mixed_precision,
+         log_with=args.report_to,
+         project_config=accelerator_project_config,
+     )
+
+     if accelerator.is_main_process:
+         os.makedirs(args.output_dir, exist_ok=True)  # Make results folder (holds all experiment subfolders)
+         save_dir = os.path.join(args.output_dir, args.exp_name)
+         os.makedirs(save_dir, exist_ok=True)
+         args_dict = vars(args)
+         # Save to a JSON file
+         json_dir = os.path.join(save_dir, "args.json")
+         with open(json_dir, 'w') as f:
+             json.dump(args_dict, f, indent=4)
+         checkpoint_dir = f"{save_dir}/checkpoints"  # Stores saved model checkpoints
+         os.makedirs(checkpoint_dir, exist_ok=True)
+         logger = create_logger(save_dir)
+         logger.info(f"Experiment directory created at {save_dir}")
+     device = accelerator.device
+     if torch.backends.mps.is_available():
+         accelerator.native_amp = False
+     if args.seed is not None:
+         set_seed(args.seed + accelerator.process_index)
+
+     # Create model:
+     assert args.resolution % 8 == 0, "Image size must be divisible by 8 (for the VAE encoder)."
+
+     if args.enc_type == "eat-base":
+         model_dir = 'EAT'
+         model_path = UserDirModule(model_dir)
+         fairseq.utils.import_user_module(model_path)
+         checkpoint_dir_eat = "/home/tton/workspace/SiT_Foley/audio_encoder/EAT-base_epoch30_pt.pt"
+         model_eat, cfg_eat, task_eat = fairseq.checkpoint_utils.load_model_ensemble_and_task([checkpoint_dir_eat])
+         model_eat = model_eat[0].to(device)
+         encoders, encoder_types, architectures = [model_eat], ['eat-base'], ['vit']
+         z_dims = [768]
+     elif args.enc_type == "None":
+         encoders, encoder_types, architectures = [None], [None], [None]
+         z_dims = [0]
+
+     model = MMDiT(
+         adm_in_channels=120,
+         z_dims = z_dims,
+         encoder_depth=args.encoder_depth,
+     )
+
+     model = model.to(device)
+     ema = deepcopy(model).to(device)  # Create an EMA of the model for use after training
+     requires_grad(ema, False)
+     model_audioldm = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2")
+     vae = model_audioldm.vae.to(device)
+     vae.eval()
+     for param in vae.parameters():
+         param.requires_grad = False
+
+     scale_factor = 0.18215
+     latents_scale = scale_factor
+     latents_bias = 0.
+
+     # create loss function
+     loss_fn = SILoss(
+         prediction=args.prediction,
+         path_type=args.path_type,
+         encoders=encoders,
+         accelerator=accelerator,
+         latents_scale=latents_scale,
+         latents_bias=latents_bias,
+         weighting=args.weighting
+     )
+     if accelerator.is_main_process:
+         logger.info(f"SiT Parameters: {sum(p.numel() for p in model.parameters()):,}")
+
+     # Setup optimizer (we used default Adam betas=(0.9, 0.999) and a constant learning rate of 1e-4 in our paper):
+     if args.allow_tf32:
+         torch.backends.cuda.matmul.allow_tf32 = True
+         torch.backends.cudnn.allow_tf32 = True
+
+     optimizer = torch.optim.AdamW(
+         model.parameters(),
+         lr=args.learning_rate,
+         betas=(args.adam_beta1, args.adam_beta2),
+         weight_decay=args.adam_weight_decay,
+         eps=args.adam_epsilon,
+     )
+
+     # Setup data:
+     train_dataset = audio_video_spec_fullset_Dataset_Train(args.data_dir)
+     local_batch_size = int(args.batch_size // accelerator.num_processes)
+     train_dataloader = DataLoader(
+         train_dataset,
+         batch_size=local_batch_size,
+         shuffle=True,
+         num_workers=args.num_workers,
+         pin_memory=True,
+         drop_last=True,
+         collate_fn=collate_fn_taro,
+     )
+     if accelerator.is_main_process:
+         logger.info(f"Dataset contains {len(train_dataset):,} samples ({args.data_dir})")
+
+     # Prepare models for training:
+     update_ema(ema, model, decay=0)  # Ensure EMA is initialized with synced weights
+     model.train()  # important! This enables embedding dropout for classifier-free guidance
+     ema.eval()  # EMA model should always be in eval mode
+
+     # resume:
+     global_step = 0
+     if args.resume_step > 0:
+         ckpt_name = str(args.resume_step).zfill(7) + '.pt'
+         ckpt = torch.load(
+             f'{os.path.join(args.output_dir, args.exp_name)}/checkpoints/{ckpt_name}',
+             map_location='cpu',
+         )
+         model.load_state_dict(ckpt['model'])
+         ema.load_state_dict(ckpt['ema'])
+         optimizer.load_state_dict(ckpt['opt'])
+         global_step = ckpt['steps']
+
+     model, optimizer, train_dataloader = accelerator.prepare(
+         model, optimizer, train_dataloader
+     )
+
+     if accelerator.is_main_process:
+         tracker_config = vars(copy.deepcopy(args))
+         accelerator.init_trackers(
+             project_name="REPA",
+             config=tracker_config,
+             init_kwargs={
+                 "wandb": {"name": f"{args.exp_name}"}
+             },
+         )
+
+     progress_bar = tqdm(
+         range(0, args.max_train_steps),
+         initial=global_step,
+         desc="Steps",
+         # Only show the progress bar once on each machine.
+         disable=not accelerator.is_local_main_process,
+     )
+
+     # Fixed conditioning batch (spectrogram latents, onsets, video features) kept for sampling/visualization:
+     sample_batch_size = 64 // accelerator.num_processes
+     batch_gt_xs = next(iter(train_dataloader))
+     gt_xs = batch_gt_xs["mix_spec"]
+
+     with torch.no_grad():
+         gt_xs = gt_xs[:sample_batch_size]
+         encoder_posterior_gt_xs = vae.encode(gt_xs.to(device))[0]
+         gt_xs = encoder_posterior_gt_xs.sample() * scale_factor
+     ys = batch_gt_xs["mix_onset"][:sample_batch_size]
+     ys = ys.to(device)
+     contexts = batch_gt_xs["mix_video_feat"][:sample_batch_size]
+     contexts = contexts.to(device)
+
+     for epoch in range(args.epochs):
+         model.train()
+         for batch_idx in train_dataloader:
+             raw_spec = batch_idx["mix_spec"]
+             context = batch_idx["mix_video_feat"]
+             y = batch_idx["mix_onset"]
+             with torch.no_grad():
+                 raw_spec = raw_spec.permute(0, 1, 3, 2)
+                 encoder_posterior = vae.encode(raw_spec.to(device), return_dict=True)[0]
+                 x = encoder_posterior.sample() * scale_factor
+             raw_image = batch_idx["mix_fbank"]
+
+             x = x.squeeze(dim=1).to(device)
+             context = context.to(device)
+             y = y.to(device)
+             z = None
+             labels = context
+             with torch.no_grad():
+                 zs = []
+                 with accelerator.autocast():
+                     for encoder, encoder_type, arch in zip(encoders, encoder_types, architectures):
+                         if encoder is not None:
+                             raw_image = raw_image[:, :834].unsqueeze(1)
+                             z = encoder.extract_features(raw_image, padding_mask=None, mask=False, remove_extra_tokens=True)['x']  # B x 416 x 768
+                             zs.append(z)
+
+             with accelerator.accumulate(model):
+                 model_kwargs = dict(context=labels, y=y)
+                 loss, proj_loss = loss_fn(model, x, model_kwargs, zs=zs)
+                 loss_mean = loss.mean()
+                 if len(zs) > 0:
+                     proj_loss_mean = proj_loss.mean()
+                     loss = loss_mean + proj_loss_mean * args.proj_coeff
+                 else:
+                     proj_loss_mean = torch.tensor(0., device=device)
+                     loss = loss_mean
+
+                 ## optimization
+                 accelerator.backward(loss)
+                 if accelerator.sync_gradients:
+                     params_to_clip = model.parameters()
+                     grad_norm = accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
+                 optimizer.step()
+                 optimizer.zero_grad(set_to_none=True)
+
+             if accelerator.sync_gradients:
+                 update_ema(ema, model)  # update the EMA copy after each optimizer step
+
+             # bookkeeping once gradients have been synced
+             if accelerator.sync_gradients:
+                 progress_bar.update(1)
+                 global_step += 1
+             if global_step % args.checkpointing_steps == 0 and global_step > 0:
+                 if accelerator.is_main_process:
+                     checkpoint = {
+                         "model": model.state_dict(),
+                         "ema": ema.state_dict(),
+                         "opt": optimizer.state_dict(),
+                         "args": args,
+                         "steps": global_step,
+                     }
+                     checkpoint_path = f"{checkpoint_dir}/{global_step:07d}.pt"
+                     torch.save(checkpoint, checkpoint_path)
+                     logger.info(f"Saved checkpoint to {checkpoint_path}")
+
+             logs = {
+                 "loss": accelerator.gather(loss_mean).mean().detach().item(),
+                 "proj_loss": accelerator.gather(proj_loss_mean).mean().detach().item(),
+                 "grad_norm": accelerator.gather(grad_norm).mean().detach().item()
+             }
+             progress_bar.set_postfix(**logs)
+             accelerator.log(logs, step=global_step)
+
+             if global_step >= args.max_train_steps:
+                 break
+         if global_step >= args.max_train_steps:
+             break
+
+     model.eval()  # important! This disables randomized embedding dropout
+     # do any sampling/FID calculation/etc. with ema (or model) in eval mode ...
+
+     accelerator.wait_for_everyone()
+     if accelerator.is_main_process:
+         logger.info("Done!")
+     accelerator.end_training()
+
+ def parse_args(input_args=None):
+     parser = argparse.ArgumentParser(description="Training")
+
+     # logging:
+     parser.add_argument("--output-dir", type=str, default="exps")
+     parser.add_argument("--exp-name", type=str, required=True)
+     parser.add_argument("--logging-dir", type=str, default="logs")
+     parser.add_argument("--report-to", type=str, default="wandb")
+     parser.add_argument("--sampling-steps", type=int, default=10000)
+     parser.add_argument("--resume-step", type=int, default=0)
+
+     # model
+     parser.add_argument("--model", type=str)
+     parser.add_argument("--num-classes", type=int, default=1000)
+     parser.add_argument("--encoder-depth", type=int, default=8)
+     parser.add_argument("--fused-attn", action=argparse.BooleanOptionalAction, default=True)
+     parser.add_argument("--qk-norm", action=argparse.BooleanOptionalAction, default=False)
+
+     # dataset
+     parser.add_argument("--data-dir", type=str, default="/home/tton/tton_data/data/VGGSound")
+     parser.add_argument("--resolution", type=int, choices=[256], default=256)
+     parser.add_argument("--batch-size", type=int, default=64)
+
+     # precision
+     parser.add_argument("--allow-tf32", action="store_true")
+     parser.add_argument("--mixed-precision", type=str, default="fp16", choices=["no", "fp16", "bf16"])
+
+     # optimization
+     parser.add_argument("--epochs", type=int, default=1400)
+     parser.add_argument("--max-train-steps", type=int, default=1000000)
+     parser.add_argument("--checkpointing-steps", type=int, default=50000)
+     parser.add_argument("--gradient-accumulation-steps", type=int, default=1)
+     parser.add_argument("--learning-rate", type=float, default=1e-4)
+     parser.add_argument("--adam-beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
+     parser.add_argument("--adam-beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
+     parser.add_argument("--adam-weight-decay", type=float, default=0., help="Weight decay to use.")
+     parser.add_argument("--adam-epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
+     parser.add_argument("--max-grad-norm", default=1.0, type=float, help="Max gradient norm.")
+
+     # seed
+     parser.add_argument("--seed", type=int, default=0)
+
+     # cpu
+     parser.add_argument("--num-workers", type=int, default=4)
+
+     # loss
+     parser.add_argument("--path-type", type=str, default="linear", choices=["linear", "cosine"])
+     parser.add_argument("--prediction", type=str, default="v", choices=["v"])  # currently we only support v-prediction
+     parser.add_argument("--cfg-prob", type=float, default=0.1)
+     parser.add_argument("--enc-type", type=str, default='dinov2-vit-b')
+     parser.add_argument("--proj-coeff", type=float, default=0.5)
+     parser.add_argument("--weighting", default="uniform", type=str, help="Loss weighting over timesteps.")
+     parser.add_argument("--legacy", action=argparse.BooleanOptionalAction, default=False)
+
+     if input_args is not None:
+         args = parser.parse_args(input_args)
+     else:
+         args = parser.parse_args()
+
+     return args
+
+ if __name__ == "__main__":
+     args = parse_args()
+
+     main(args)
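Because `parse_args` accepts an explicit argument list, a short single-process run of `train.py` can be started from Python without editing `train.sh`. The snippet below is a hypothetical smoke test, not part of the release: it assumes the preprocessed VGGSound folders and the EAT checkpoint path hard-coded above are already in place, and that wandb is configured (or `--report-to` is changed).

```python
# Hypothetical smoke run; all values are placeholders chosen for a short debug session.
from train import parse_args, main

args = parse_args([
    "--exp-name", "taro-debug",        # required flag
    "--data-dir", "./VGGSound",
    "--enc-type", "eat-base",
    "--batch-size", "8",
    "--max-train-steps", "20",         # stop after a handful of steps
    "--checkpointing-steps", "10",
    "--mixed-precision", "no",
])
main(args)
```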
train.sh ADDED
@@ -0,0 +1,15 @@
+ CUDA_VISIBLE_DEVICES=0 accelerate launch train.py \
+     --report-to="wandb" \
+     --allow-tf32 \
+     --mixed-precision="fp16" \
+     --seed=0 \
+     --path-type="linear" \
+     --prediction="v" \
+     --weighting="uniform" \
+     --model="SiT-B/2" \
+     --enc-type="eat-base" \
+     --proj-coeff=0.5 \
+     --encoder-depth=4 \
+     --output-dir="exps" \
+     --exp-name="taro-output" \
+     --data-dir="./VGGSound"
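One detail worth keeping in mind about this launch script: `--batch-size` is the global batch size, because `train.py` divides it by `accelerator.num_processes` before building each `DataLoader`. The sketch below simply works through that arithmetic for a hypothetical multi-GPU launch (e.g. `accelerate launch --multi_gpu --num_processes=4`); the numbers are illustrative, not measurements from the paper.

```python
# Effective batch size under the splitting rule used in train.py (illustrative only).
global_batch_size = 64        # --batch-size
grad_accum_steps = 1          # --gradient-accumulation-steps
num_processes = 4             # hypothetical 4-GPU accelerate launch

local_batch_size = global_batch_size // num_processes            # per-process DataLoader batch
effective_batch = local_batch_size * num_processes * grad_accum_steps

print(local_batch_size, effective_batch)                          # 16 64
```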