yasserTII committed on
Commit
f742fc3
1 Parent(s): eaf9dee

Upload folder using huggingface_hub

Files changed (6)
  1. README.md +98 -0
  2. crop_videos.py +105 -0
  3. requirements.txt +6 -0
  4. train.tar.gz +3 -0
  5. utils.py +262 -0
  6. visper_stats.png +3 -0
README.md ADDED
@@ -0,0 +1,98 @@
This repository contains **ViSpeR**, a large-scale dataset for Visual Speech Recognition (VSR) in Arabic, Chinese, French and Spanish.

## Dataset Summary:

Given the scarcity of publicly available VSR data for non-English languages, we collected VSR data at scale for the four most spoken languages.

Comparison of VSR datasets: our proposed ViSpeR dataset is larger than other datasets covering non-English languages for the VSR task. For our dataset, the numbers in parentheses denote the number of clips; we also give the clip coverage of the TedX and Wild subsets of ViSpeR.
![ViSpeR dataset statistics](visper_stats.png)

## Downloading the data:

First, use ```language.json``` to download the videos and put them in separate per-language folders. The raw data should be structured as follows:
```bash
Data/
├── Chinese/
│   ├── video_id.mp4
│   └── ...
├── Arabic/
│   ├── video_id.mp4
│   └── ...
├── French/
│   ├── video_id.mp4
│   └── ...
├── Spanish/
│   ├── video_id.mp4
│   └── ...
```
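
This release does not include a downloader, so here is a minimal sketch of this step. It assumes ```language.json``` maps each language name to a list of YouTube video ids (an assumption about its schema), and that ```yt-dlp``` and ```ffmpeg``` are installed. Note that ```crop_videos.py``` globs for mp4 files under a ```25_fps_videos*``` subfolder of ```--video_dir```, so the sketch writes there and resamples everything to the 25 fps the processing code assumes:
```python
# download_raw.py -- hypothetical helper; not part of this release.
# Assumes language.json maps each language name to a list of YouTube video ids.
import json
import os
import subprocess

def download_language(lang, ids, out_root="Data"):
    # crop_videos.py globs '<video_dir>/25_fps_videos*/*.mp4', so write there
    out_dir = os.path.join(out_root, lang, "25_fps_videos")
    os.makedirs(out_dir, exist_ok=True)
    for vid in ids:
        raw = os.path.join(out_dir, f"{vid}_raw.mp4")
        final = os.path.join(out_dir, f"{vid}.mp4")
        subprocess.call(["yt-dlp", "-f", "mp4", "-o", raw,
                         f"https://www.youtube.com/watch?v={vid}"])
        # resample to the 25 fps assumed throughout the processing code
        subprocess.call(["ffmpeg", "-loglevel", "error", "-y", "-i", raw, "-r", "25", final])
        if os.path.exists(raw):
            os.remove(raw)

if __name__ == "__main__":
    with open("language.json") as f:
        for lang, ids in json.load(f).items():
            download_language(lang, ids)
```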

## Setup:

1- Set up the environment:
```bash
conda create --name visper python=3.10
conda activate visper
pip install -r requirements.txt
```

2- Install ffmpeg:
```bash
conda install "ffmpeg<5" -c conda-forge
```

## Processing the data:

Then, use the provided metadata to process the raw data into the ViSpeR dataset. You can run ```crop_videos.py``` for this; note that every clip is cropped around the face and the mouth region is aligned to a mean face before being saved (the expected metadata layout is sketched below):

```bash
python crop_videos.py --video_dir [path_to_data_language] --save_path [save_path_language] --json [language_metadata.json] --use_ffmpeg True
```
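
For reference, the per-language metadata json maps each video id to a list of clip entries. The layout below is reconstructed from how ```crop_videos.py``` and ```utils.py``` read the file, so treat the field semantics as derived rather than official:
```python
# Sketch of [language_metadata.json] as consumed by crop_videos.py / utils.py.
# Times are in seconds (frame index = time * 25); boxes and landmarks are
# normalized to the frame size; each frame carries five landmarks.
metadata = {
    "video_id": [                                  # one list of clips per untrimmed video
        {
            "start": 12.40,                        # clip start time in seconds
            "end": 15.12,                          # clip end time in seconds
            "bboxs": [[0.31, 0.22, 0.47, 0.58]],   # per-frame face box [x1, y1, x2, y2]
            "landmarks": [[[0.35, 0.30], [0.44, 0.30], [0.39, 0.38],
                           [0.36, 0.47], [0.43, 0.47]]],  # per-frame 5 points
            "label": "transcription of the clip",  # written to labels.json
        },
    ],
}
```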

The processed data will be structured as follows:
```bash
ViSpeR/
├── Chinese/
│   ├── video_id/
│   │   ├── 00001.mp4
│   │   ├── 00001.json
│   └── ...
├── Arabic/
│   ├── video_id/
│   │   ├── 00001.mp4
│   │   ├── 00001.json
│   └── ...
├── French/
│   ├── video_id/
│   │   ├── 00001.mp4
│   │   ├── 00001.json
│   └── ...
├── Spanish/
│   ├── video_id/
│   │   ├── 00001.mp4
│   │   ├── 00001.json
│   └── ...
```

Each ```video_id/xxxx.json``` file holds the 'label' (the spoken text) of the corresponding clip ```video_id/xxxx.mp4```; ```crop_videos.py``` also writes a single ```labels.json``` to the save path, mapping every clip to its label.

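As a minimal sketch of consuming the processed clips (```save_path``` below is a placeholder for whatever you passed as ```--save_path```):
```python
import json
import cv2

save_path = "ViSpeR/French"  # placeholder: the --save_path used with crop_videos.py

# labels.json maps relative clip paths to their text labels
with open(f"{save_path}/labels.json", encoding="utf-8") as f:
    labels = json.load(f)

clip_rel, label = next(iter(labels.items()))
print(clip_rel, "->", label)

# read the 96x96 mouth-ROI frames of the clip
cap = cv2.VideoCapture(f"{save_path}/{clip_rel}")
frames = []
while True:
    ret, frame = cap.read()
    if not ret:
        break
    frames.append(frame)
cap.release()
print(f"{len(frames)} frames at 25 fps")
```
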
## Intended Use

This dataset can be used to train models for visual speech recognition. It is particularly useful for research and development in the field of audio-visual content processing, and for assessing the performance of current and future models.

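For example, a model's predicted transcriptions can be scored against the clip labels with word-level edit distance (the ```editdistance``` package is already pinned in requirements.txt; the reference/hypothesis pairs below are placeholders):
```python
import editdistance

def wer(ref: str, hyp: str) -> float:
    # word error rate: word-level edit distance normalized by reference length
    r, h = ref.split(), hyp.split()
    return editdistance.eval(r, h) / max(len(r), 1)

# placeholder pairs; in practice, pair labels.json entries with model outputs
refs = ["bonjour tout le monde", "hola a todos"]
hyps = ["bonjour tous le monde", "hola a todos"]
scores = [wer(r, h) for r, h in zip(refs, hyps)]
print(f"mean WER: {sum(scores) / len(scores):.3f}")
```
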
## Limitations and Biases
Because the data was collected from YouTube, biases inherent to the platform may be present in the dataset. And although measures were taken to ensure diversity of content, the filtering process may still skew the dataset towards certain types of content.

## Citation
```bibtex
@article{djilali2023vsr,
  title={Do VSR Models Generalize Beyond LRS3?},
  author={Djilali, Yasser Abdelaziz Dahou and Narayan, Sanath and Bihan, Eustache Le and Boussaid, Haithem and Almazrouei, Ebtessam and Debbah, Merouane},
  journal={arXiv preprint arXiv:2311.14063},
  year={2023}
}
```
crop_videos.py ADDED
@@ -0,0 +1,105 @@
import json
import os
from tqdm import tqdm
from glob import glob
import argparse
import time
from utils import crop_video, crop_face, write_video, crop_and_save_audio
from concurrent.futures import ProcessPoolExecutor, as_completed
import sys


'''
Crop the untrimmed videos into multiple clips using the corresponding start and end times, bounding boxes and face landmarks.

Usage:
    python crop_videos.py --video_dir /path/to/25-fps-videos --save_path /path/to/save/the/clips --json /path/to/json/file

To save videos using ffmpeg, add "--use_ffmpeg True". This takes additional time but saves disk space.

To additionally save audio as separate wav files, add "--save_audio True"

To merge audio with video and save as a single mp4, add "--merge_audio True"
'''


def write_clip(metadata, vid_p, args):
    '''
    param metadata: list of dicts, each containing start, end, bounding boxes, landmarks
    param vid_p: path to original untrimmed video at 25fps
    param args: main args
    '''
    for k, clip in enumerate(metadata):
        # get the clip frames and corresponding landmarks
        video, landmarks = crop_video(vid_p, clip)
        # get the cropped sequence around the mouth using the landmarks
        crop_seq = crop_face(video, landmarks)
        save_video_path = os.path.join(args.save_path, 'videos', vid_p.split('/')[-1][:-4], f'{str(k).zfill(5)}.mp4')
        save_audio_path = save_video_path.replace('.mp4', '.wav')
        # get the audio part of the clip
        if args.save_audio or args.merge_audio:
            crop_and_save_audio(vid_p, save_audio_path, clip['start'], clip['end'])
        # write clip to disk
        write_video(save_video_path, crop_seq, save_audio_path, merge_audio=args.merge_audio, use_ffmpeg=args.use_ffmpeg)
    return


def main(args):
    savepath = args.save_path
    json_path = args.json
    vid_dir = args.video_dir

    # the untrimmed 25 fps videos are expected under '<video_dir>/25_fps_videos*/'
    video_list = glob(os.path.join(vid_dir, '25_fps_videos*', '*.mp4'))
    print(f'Loading json file {json_path}')
    data = json.load(open(json_path, 'r'))
    print(f'Total number of videos {len(video_list)}. Json length {len(data)}')
    video_ids = list(data.keys())

    futures = []
    writer_str = 'Ffmpeg' if args.use_ffmpeg else 'cv2.VideoWriter'
    print(f'Using {writer_str} to save the cropped clips.')

    video_paths = {}  # video id -> path of the untrimmed video, reused for the labels below
    with tqdm(total=len(video_ids), file=sys.stdout) as progress:
        with ProcessPoolExecutor() as executor:
            for z in video_ids:
                idx = [k for k, i in enumerate(video_list) if z in i]
                metadata = data[z]
                vid_p = video_list[idx[0]]
                video_paths[z] = vid_p
                os.makedirs(os.path.join(savepath, 'videos', vid_p.split('/')[-1][:-4]), exist_ok=True)
                future = executor.submit(write_clip, metadata, vid_p, args)
                futures.append(future)

            for _ in as_completed(futures):
                progress.update()

    print('Cropping videos completed.')
    print('Getting the labels.')
    labels = {}
    for z in tqdm(video_ids):
        metadata = data[z]
        vid_p = video_paths[z]  # the video this id was matched to above
        for k, clip in enumerate(metadata):
            labk = clip['label']
            fi = os.path.join('videos', vid_p.split('/')[-1][:-4], f'{str(k).zfill(5)}.mp4')
            labels[fi] = labk
    label_file = f'{args.save_path}/labels.json'
    with open(label_file, 'w', encoding='utf-8') as f:
        json.dump(labels, f)


if __name__ == "__main__":
    # argparse's type=bool treats any non-empty string (even "False") as True,
    # so parse boolean flags explicitly while keeping the documented "--flag True" usage
    def str2bool(v):
        return str(v).lower() in ('true', '1', 'yes')

    parser = argparse.ArgumentParser(description='ViSpeR crop videos')
    parser.add_argument('--save_path', type=str, default='', help='Path for saving.')
    parser.add_argument('--json', type=str, default='', help='Json path')
    parser.add_argument('--video_dir', type=str, default='', help='Path to directory where original videos are stored.')
    parser.add_argument('--save_audio', type=str2bool, default=False, help='Whether to save audio info.')
    parser.add_argument('--merge_audio', type=str2bool, default=False, help='Whether to merge audio with the video when saving.')
    parser.add_argument('--use_ffmpeg', type=str2bool, default=False, help='Whether to use ffmpeg instead of cv2 for saving the video.')

    args = parser.parse_args()
    tic = time.time()
    main(args)
    print(f'Elapsed total time for processing: {time.time()-tic:.1f} seconds')
requirements.txt ADDED
@@ -0,0 +1,6 @@
python-speech-features==0.6
scipy==1.10.0
opencv-python==4.5.4.60
sentencepiece==0.1.96
editdistance==0.6.0
numpy
tqdm
train.tar.gz ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e089b2acb4e243c1ffa8f98657af6e731d390dbcfffeb2f6e8a0127401797f2f
size 39272017920
utils.py ADDED
@@ -0,0 +1,262 @@
import cv2
import numpy as np
from scipy import signal
import os
import subprocess

CROP_SCALE = 0.4
WINDOW_MARGIN = 12
START_IDX, STOP_IDX = 3, 5
STABLE_POINTS = (36, 45, 33, 48, 54)
CROP_HEIGHT, CROP_WIDTH = 96, 96

# neutral mean-face landmarks used as the alignment reference;
# 20words_mean_face.npy must sit next to this file
REFERENCE = np.load(os.path.join(os.path.dirname(__file__), '20words_mean_face.npy'))


def crop_and_save_audio(mp4_path: str, saving_path: str, start_audio: float, end_audio: float) -> None:
    """
    Crops the original audio track between the start and end times.
    Saves it as a wav file with a single channel and 16kHz sampling rate.

    :param mp4_path: str, path to original video.
    :param saving_path: str, path where audio will be saved. SHOULD END WITH .wav
    :param start_audio: float, start time of clip in seconds
    :param end_audio: float, end time of clip in seconds
    :return: None.
    """
    # write audio.
    command = f"ffmpeg -loglevel error -y -i {mp4_path} -ss {start_audio} -to {end_audio} -vn -acodec pcm_s16le -ar 16000 -ac 1 {saving_path}"
    subprocess.call(command, shell=True)


def crop_video(vid_path: str, clip_data: dict):
    '''
    Reads the video frames of the video (in vid_path) between clip_data['start'] and clip_data['end'] times.
    Crops the faces in these frames using the bounding boxes given by clip_data['bboxs'].
    Returns the sequence of 224x224 face crops and clip_data['landmarks'] rescaled to that resolution.
    '''
    cap = cv2.VideoCapture(vid_path)

    frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    num_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    start_frame, end_frame = round(clip_data['start']*25), round(clip_data['end']*25)  # videos are at 25 fps
    clip_frames = end_frame - start_frame
    assert end_frame <= num_frames, f'End frame ({end_frame}) exceeds total number of frames ({num_frames})'

    # boxes and landmarks are stored normalized; scale them to pixel coordinates
    landmarks_n, bboxs_n = np.array(clip_data['landmarks']), np.array(clip_data['bboxs'])
    bboxs = np.multiply(bboxs_n, [frame_width, frame_height, frame_width, frame_height])
    landmarks = np.multiply(landmarks_n, [frame_width, frame_height])
    assert len(landmarks) == clip_frames, f'Landmarks length ({len(landmarks)}) does not match the number of frames in the clip ({clip_frames})'

    dets = {'x': [], 'y': [], 's': []}
    for det in bboxs:
        dets['s'].append(max((det[3]-det[1]), (det[2]-det[0]))/2)  # half the box size
        dets['y'].append((det[1]+det[3])/2)  # crop center y
        dets['x'].append((det[0]+det[2])/2)  # crop center x

    # Smooth detections
    dets['s'] = signal.medfilt(dets['s'], kernel_size=13)
    dets['x'] = signal.medfilt(dets['x'], kernel_size=13)
    dets['y'] = signal.medfilt(dets['y'], kernel_size=13)

    image_seq = []
    current_frame = start_frame
    cap.set(cv2.CAP_PROP_POS_FRAMES, start_frame)

    while current_frame < end_frame:
        ret, frame = cap.read()
        count = current_frame - start_frame
        current_frame += 1

        if not ret:
            break

        bs = dets['s'][count]            # Detection box size
        bsi = int(bs*(1+2*CROP_SCALE))   # Pad videos by this amount

        image = frame
        lands = landmarks[count]

        frame_ = np.pad(image, ((bsi, bsi), (bsi, bsi), (0, 0)), 'constant', constant_values=(110, 110))
        my = dets['y'][count]+bsi  # BBox center Y
        mx = dets['x'][count]+bsi  # BBox center X

        face = frame_[int(my-bs):int(my+bs*(1+2*CROP_SCALE)), int(mx-bs*(1+CROP_SCALE)):int(mx+bs*(1+CROP_SCALE))]

        # translate and scale the landmarks to the 224x224 crop
        # (in-place on a view, so the changes propagate to `landmarks`)
        lands[:, 0] -= int(mx-bs*(1+CROP_SCALE) - bsi)
        lands[:, 1] -= int(my - bs - bsi)
        lands[:, 0] *= (224/face.shape[1])
        lands[:, 1] *= (224/face.shape[0])

        image_seq.append(cv2.resize(face, (224, 224)))

    image_seq = np.array(image_seq)

    return image_seq, landmarks


def linear_interpolate(landmarks, start_idx, stop_idx):
    """Linearly interpolates the landmarks between two detected frames.
    (Assumed implementation: this helper is called below but was missing
    from the file; this is the standard linear fill between two anchors.)
    """
    start_landmarks = landmarks[start_idx]
    stop_landmarks = landmarks[stop_idx]
    delta = stop_landmarks - start_landmarks
    for idx in range(1, stop_idx - start_idx):
        landmarks[start_idx + idx] = start_landmarks + idx / float(stop_idx - start_idx) * delta
    return landmarks


def landmarks_interpolate(landmarks):
    """landmarks_interpolate.

    :param landmarks: List, the raw landmarks (modified in-place)
    """
    valid_frames_idx = [idx for idx, _ in enumerate(landmarks) if _ is not None]
    if not valid_frames_idx:
        return None
    for idx in range(1, len(valid_frames_idx)):
        if valid_frames_idx[idx] - valid_frames_idx[idx-1] == 1:
            continue
        else:
            landmarks = linear_interpolate(landmarks, valid_frames_idx[idx-1], valid_frames_idx[idx])
    valid_frames_idx = [idx for idx, _ in enumerate(landmarks) if _ is not None]
    # -- Corner case: keep frames at the beginning or at the end that failed to be detected.
    if valid_frames_idx:
        landmarks[:valid_frames_idx[0]] = [landmarks[valid_frames_idx[0]]] * valid_frames_idx[0]
        landmarks[valid_frames_idx[-1]:] = [landmarks[valid_frames_idx[-1]]] * (len(landmarks) - valid_frames_idx[-1])
    valid_frames_idx = [idx for idx, _ in enumerate(landmarks) if _ is not None]
    assert len(valid_frames_idx) == len(landmarks), "not every frame has landmark"
    return landmarks


def crop_patch(image_seq, landmarks):
    """crop_patch.

    :param image_seq: numpy.array, the sequence of face frames.
    :param landmarks: List, the interpolated landmarks.
    """
    frame_idx = 0
    sequence = []
    for frame in image_seq:
        # average the landmarks over a temporal window to stabilise the crop
        window_margin = min(WINDOW_MARGIN // 2, frame_idx, len(landmarks) - 1 - frame_idx)
        smoothed_landmarks = np.mean([landmarks[x] for x in range(frame_idx - window_margin, frame_idx + window_margin + 1)], axis=0)
        smoothed_landmarks += landmarks[frame_idx].mean(axis=0) - smoothed_landmarks.mean(axis=0)
        transformed_frame, transformed_landmarks = affine_transform(frame, smoothed_landmarks, REFERENCE)
        sequence.append(cut_patch(transformed_frame, transformed_landmarks[START_IDX:STOP_IDX], CROP_HEIGHT//2, CROP_WIDTH//2))
        frame_idx += 1

    return np.array(sequence)

def affine_transform(frame, landmarks, reference,
                     target_size=(256, 256),
                     reference_size=(256, 256),
                     stable_points=STABLE_POINTS,
                     interpolation=cv2.INTER_LINEAR,
                     border_mode=cv2.BORDER_CONSTANT,
                     border_value=0
                     ):
    """affine_transform.

    :param frame: numpy.array, the input frame.
    :param landmarks: List, the tracked landmarks.
    :param reference: numpy.array, the neutral reference frame.
    :param target_size: tuple, size of the output image.
    :param reference_size: tuple, size of the neutral reference frame.
    :param stable_points: tuple, landmark indices for the stable points.
    :param interpolation: interpolation method to be used.
    :param border_mode: pixel extrapolation method.
    :param border_value: value used in case of a constant border. By default, it is 0.
    """
    # the clip metadata stores five landmarks per frame, ordered to match STABLE_POINTS
    lands = [landmarks[x] for x in range(5)]

    stable_reference = np.vstack([reference[x] for x in stable_points])
    stable_reference[:, 0] -= (reference_size[0] - target_size[0]) / 2.0
    stable_reference[:, 1] -= (reference_size[1] - target_size[1]) / 2.0

    # Warp the face patch and the landmarks
    transform = cv2.estimateAffinePartial2D(np.vstack(lands), stable_reference, method=cv2.LMEDS)[0]
    transformed_frame = cv2.warpAffine(
        frame,
        transform,
        dsize=(target_size[0], target_size[1]),
        flags=interpolation,
        borderMode=border_mode,
        borderValue=border_value,
    )
    transformed_landmarks = np.matmul(landmarks, transform[:, :2].transpose()) + transform[:, 2].transpose()

    return transformed_frame, transformed_landmarks


def cut_patch(img, landmarks, height, width, threshold=5):
    """cut_patch.

    :param img: ndarray, an input image.
    :param landmarks: ndarray, the corresponding landmarks for the input image.
    :param height: int, the distance from the centre to the side of a bounding box.
    :param width: int, the distance from the centre to the side of a bounding box.
    :param threshold: int, the allowed bias between the bounding box centre and the image border.
    """
    center_x, center_y = np.mean(landmarks, axis=0)

    # clamp the crop centre to the image, failing if the bias is too large
    if center_y - height < 0:
        center_y = height
    if center_y - height < 0 - threshold:
        raise Exception('too much bias in height')
    if center_x - width < 0:
        center_x = width
    if center_x - width < 0 - threshold:
        raise Exception('too much bias in width')

    if center_y + height > img.shape[0]:
        center_y = img.shape[0] - height
    if center_y + height > img.shape[0] + threshold:
        raise Exception('too much bias in height')
    if center_x + width > img.shape[1]:
        center_x = img.shape[1] - width
    if center_x + width > img.shape[1] + threshold:
        raise Exception('too much bias in width')

    cropped_img = np.copy(img[int(round(center_y) - round(height)): int(round(center_y) + round(height)),
                              int(round(center_x) - round(width)): int(round(center_x) + round(width))])
    return cropped_img


def crop_face(image_seq, landmarks):
    # Interpolate the landmarks over frames where detection failed
    preprocessed_landmarks = landmarks_interpolate(list(landmarks))
    # crop the face to obtain a sequence of 96x96 sized mouth ROIs
    crop_seq = crop_patch(image_seq, preprocessed_landmarks)

    return crop_seq


def merge_audio_video(tmp_path, audio_path, save_video_path):
    # Merges the corresponding audio and video tracks of the clip; the associated .wav file is removed afterwards.
    command = f"ffmpeg -loglevel error -y -i {tmp_path} -i {audio_path} -c:v libx264 -c:a aac -ar 16000 -ac 1 {save_video_path}"
    subprocess.call(command, shell=True)
    subprocess.call(f'rm {tmp_path}', shell=True)
    subprocess.call(f'rm {audio_path}', shell=True)


def convert_ffmpeg(vid_path):
    # Converts the mp4v-encoded video to h264 using ffmpeg. Saves disk space, but takes additional time.
    tmp_path = vid_path[:-4] + 'temp2.mp4'
    subprocess.call(f"cp {vid_path} {tmp_path}", shell=True)
    subprocess.call(f"ffmpeg -loglevel error -i {tmp_path} -r 25 -vcodec libx264 -q:v 1 -y {vid_path}", shell=True)
    subprocess.call(f"rm {tmp_path}", shell=True)


def write_video(save_video_path, crop_seq, audio_path=None, merge_audio=False, use_ffmpeg=False):
    # Writes the clip video to disk. Merges with audio if enabled.
    tmp_path = save_video_path.replace('.mp4', '_temp.mp4') if merge_audio else save_video_path
    vid_writer = cv2.VideoWriter(tmp_path, cv2.VideoWriter_fourcc(*'mp4v'), 25, (96, 96))
    for ci in crop_seq:
        vid_writer.write(ci)
    vid_writer.release()
    if use_ffmpeg and not merge_audio:
        convert_ffmpeg(tmp_path)

    if merge_audio:
        merge_audio_video(tmp_path, audio_path, save_video_path)
visper_stats.png ADDED

Git LFS Details

  • SHA256: 66cda6bebbe21065f338038c6e7df09a6da25fb574f90db30ebaf24491c6e839
  • Pointer size: 128 Bytes
  • Size of remote file: 131 Bytes