https://pyimagesearch.com/2019/12/16/training-a-custom-dlib-shape-predictor/
Examples of structural shapes include: Faces Hands Fingers Toes etc. For example, faces come in all different shapes and sizes, and they all share common structural characteristics — the eyes are above the nose, the nose is above the mouth, etc. The goal of shape/landmark predictors is to exploit this structural knowledge and given enough training data, learn how to automatically predict the location of these structures. How do shape/landmark predictors work? Figure 2: How do shape/landmark predictors work? The dlib library implements a shape predictor algorithm with an ensemble of regression trees approach using the method described by Kazemi and Sullivan in their 2014 CVPR paper (image source). There are a variety of shape predictor algorithms. Exactly which one you use depends on whether: You’re working with 2D or 3D data You need to utilize deep learning Or, if traditional Computer Vision and Machine Learning algorithms will suffice The shape predictor algorithm implemented in the dlib library comes from Kazemi and Sullivan’s 2014 CVPR paper, One Millisecond Face Alignment with an Ensemble of Regression Trees. To estimate the landmark locations, the algorithm: Examines a sparse set of input pixel intensities (i.e., the “features” to the input model) Passes the features into an Ensemble of Regression Trees (ERT) Refines the predicted locations to improve accuracy through a cascade of regressors The end result is a shape predictor that can run in super real-time! For more details on the inner-workings of the landmark prediction, be sure to refer to Kazemi and Sullivan’s 2014 publication.
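In code terms, a trained dlib shape predictor is applied to a detected face bounding box rather than to the whole image. Below is a minimal sketch of what inference looks like, assuming you already have some trained predictor file on disk (the file and image names here are placeholders, not files from this tutorial):

import dlib
import cv2

# HOG + Linear SVM face detector and a previously trained shape predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("some_predictor.dat")

image = cv2.imread("face.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# for each detected face, the ERT cascade refines an initial shape estimate
# into the final landmark positions
for rect in detector(gray, 0):
    shape = predictor(gray, rect)
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    print(points)

We will build exactly this kind of pipeline later in the post, only with a predictor trained to localize the eyes.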
The iBUG 300-W dataset Figure 3: In this tutorial we will use the iBUG 300-W face landmark dataset to learn how to train a custom dlib shape predictor. To train our custom dlib shape predictor, we’ll be utilizing the iBUG 300-W dataset (but with a twist). The goal of iBUG-300W is to train a shape predictor capable of localizing each individual facial structure, including the eyes, eyebrows, nose, mouth, and jawline. The dataset itself consists of 68 pairs of integer values — these values are the (x, y)-coordinates of the facial structures depicted in Figure 2 above. To create the iBUG-300W dataset, researchers manually and painstakingly annotated and labeled each of the 68 coordinates on a total of 7,764 images. A model trained on iBUG-300W can predict the location of each of these 68 (x, y)-coordinate pairs and can, therefore, localize each of the locations on the face. That’s all fine and good… …but what if we wanted to train a shape predictor to localize just the eyes? How might we go about doing that? Balancing shape predictor model speed and accuracy Figure 4: We will train a custom dlib shape/landmark predictor to recognize just eyes in this tutorial. Let’s suppose for a second that you want to train a custom shape predictor to localize just the location of the eyes.
We would have two options to accomplish this task: Utilize dlib’s pre-trained facial landmark detector used to localize all facial structures and then discard all localizations except for the eyes. Train our own custom dlib landmark predictor that returns just the locations of the eyes. In some cases you may be able to get away with the first option; however, there are two problems there, namely regarding your model speed and your model size. Model speed: Even though you’re only interested in a subset of the landmark predictions, your model is still responsible for predicting the entire set of landmarks. You can’t just tell your model “Oh hey, just give me those locations, don’t bother computing the rest.” It doesn’t work like that — it’s an “all or nothing” calculation. Model size: Since your model needs to know how to predict all landmark locations it was trained on, it therefore needs to store quantified information on how to predict each of these locations. The more information it needs to store, the larger your model size is. Think of your shape predictor model size as a grocery list — out of a list of 20 items, you may only truly need eggs and a gallon of milk, but if you’re heading to the store, you’re going to be purchasing all the items on that list because that’s what your family expects you to do! The model size is the same way.
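For concreteness, a rough sketch of the first option might look like the following, assuming you have dlib's standard pre-trained 68-point model (shape_predictor_68_face_landmarks.dat) on disk; note that the full model still runs, and we simply discard the landmarks we do not need:

import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eyes_only(gray, rect):
    # the predictor still computes *all* 68 points -- we throw away 56 of them
    shape = predictor(gray, rect)
    return [(shape.part(i).x, shape.part(i).y) for i in range(36, 48)]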
Your model doesn’t “care” that you only truly “need” a subset of the landmark predictions; it was trained to predict all of them so you’re going to get all of them in return! If you only need a subset of specific landmarks you should consider training your own custom shape predictor — you’ll end up with a model that is both smaller and faster. In the context of today’s tutorial, we’ll be training a custom dlib shape predictor to localize just the eye locations from the iBUG 300-W dataset. Such a model could be utilized in a virtual makeover application used to apply just eyeliner/mascara or it could be used in a drowsiness detector used to detect tired drivers behind the wheel of a car. Configuring your dlib development environment To follow along with today’s tutorial, you will need a virtual environment with the following packages installed: dlib OpenCV imutils Luckily, each of these packages is pip-installable, but there are a handful of pre-requisites including virtual environments. Be sure to follow these two guides for additional information: Install dlib (the easy, complete guide) pip install opencv The pip install commands include: $ workon <env-name> $ pip install dlib $ pip install opencv-contrib-python $ pip install imutils The workon command becomes available once you install virtualenv and virtualenvwrapper per either my dlib or OpenCV installation guides. Downloading the iBUG 300-W dataset Before we get too far into this tutorial, take a second now to download the iBUG 300-W dataset (~1.7GB): http://dlib.net/files/data/ibug_300W_large_face_landmark_dataset.tar.gz You’ll also want to use the “Downloads” section of this blog post to download the source code. I recommend placing the iBug 300W dataset into the zip associated with the download of this tutorial like this: $ unzip custom-dlib-shape-predictor.zip ... $ cd custom-dlib-shape-predictor $ mv ~/Downloads/ibug_300W_large_face_landmark_dataset.tar.gz . $ tar -xvf ibug_300W_large_face_landmark_dataset.tar.gz ... Alternatively (i.e. rather than clicking the hyperlink above), use wget in your terminal to download the dataset directly: $ unzip custom-dlib-shape-predictor.zip ... $ cd custom-dlib-shape-predictor $ wget http://dlib.net/files/data/ibug_300W_large_face_landmark_dataset.tar.gz $ tar -xvf ibug_300W_large_face_landmark_dataset.tar.gz ... From there you can follow along with the rest of the tutorial. Project Structure Assuming you have followed the instructions in the previous section, your project directory is now organized as follows: $ tree --dirsfirst --filelimit 10 .
├── ibug_300W_large_face_landmark_dataset │   ├── afw [1011 entries] │   ├── helen │   │   ├── testset [990 entries] │   │   └── trainset [6000 entries] │   ├── ibug [405 entries] │   ├── image_metadata_stylesheet.xsl │   ├── labels_ibug_300W.xml │   ├── labels_ibug_300W_test.xml │   ├── labels_ibug_300W_train.xml │   └── lfpw │   ├── testset [672 entries] │   └── trainset [2433 entries] ├── ibug_300W_large_face_landmark_dataset.tar.gz ├── eye_predictor.dat ├── parse_xml.py ├── train_shape_predictor.py ├── evaluate_shape_predictor.py └── predict_eyes.py 9 directories, 10 files The iBug 300-W dataset is extracted in the ibug_300W_large_face_landmark_dataset/ directory. We will review the following Python scripts in this order: parse_xml.py : Parses the train/test XML dataset files for eyes-only landmark coordinates. train_shape_predictor.py : Accepts the parsed XML files to train our shape predictor with dlib. evaluate_shape_predictor.py : Calculates the Mean Average Error (MAE) of our custom shape predictor. predict_eyes.py : Performs shape prediction using our custom dlib shape predictor, trained to only recognize eye landmarks. We’ll begin by inspecting our input XML files in the next section. Understanding the iBUG-300W XML file structure We’ll be using the iBUG-300W to train our shape predictor; however, we have a bit of a problem: iBUG-300W supplies (x, y)-coordinate pairs for all facial structures in the dataset (i.e., eyebrows, eyes, nose, mouth, and jawline)… …however, we want to train our shape predictor on just the eyes! So, what are we going to do? Are we going to find another dataset that doesn’t include the facial structures we don’t care about? Manually open up the training file and delete the coordinate pairs for the facial structures we don’t need?
Simply give up, take our ball, and go home? Of course not! We’re programmers and engineers — all we need is some basic file parsing to create a new training file that includes just the eye coordinates. To understand how we can do that, let’s first consider how facial landmarks are annotated in the iBUG-300W dataset by examining the labels_ibug_300W_train.xml training file: ... <images> <image file='lfpw/trainset/image_0457.png'> <box top='78' left='74' width='138' height='140'> <part name='00' x='55' y='141'/> <part name='01' x='59' y='161'/> <part name='02' x='66' y='182'/> <part name='03' x='75' y='197'/> <part name='04' x='90' y='209'/> <part name='05' x='108' y='220'/> <part name='06' x='131' y='226'/> <part name='07' x='149' y='232'/> <part name='08' x='167' y='230'/> <part name='09' x='181' y='225'/> <part name='10' x='184' y='208'/> <part name='11' x='186' y='193'/> <part name='12' x='185' y='179'/> <part name='13' x='184' y='167'/> <part name='14' x='186' y='152'/> <part name='15' x='185' y='142'/> <part name='16' x='181' y='133'/> <part name='17' x='95' y='128'/> <part name='18' x='105' y='121'/> <part name='19' x='117' y='117'/> <part name='20' x='128' y='115'/> <part name='21' x='141' y='116'/> <part name='22' x='156' y='115'/> <part name='23' x='162' y='110'/> <part name='24' x='169' y='108'/> <part name='25' x='175' y='108'/> <part name='26' x='180' y='109'/> <part name='27' x='152' y='127'/> <part name='28' x='157' y='136'/> <part name='29' x='162' y='145'/> <part name='30' x='168' y='154'/> <part name='31' x='152' y='166'/> <part name='32' x='158' y='166'/> <part name='33' x='163' y='168'/> <part name='34' x='167' y='166'/> <part name='35' x='171' y='164'/> <part name='36' x='111' y='134'/> <part name='37' x='116' y='130'/> <part name='38' x='124' y='128'/> <part name='39' x='129' y='130'/> <part name='40' x='125' y='134'/> <part name='41' x='118' y='136'/> <part name='42' x='161' y='127'/> <part name='43' x='166' y='123'/> <part name='44' x='173' y='122'/> <part name='45' x='176' y='125'/> <part name='46' x='173' y='129'/> <part name='47' x='167' y='129'/> <part name='48' x='139' y='194'/> <part name='49' x='151' y='186'/> <part name='50' x='159' y='180'/> <part name='51' x='163' y='182'/> <part name='52' x='168' y='180'/> <part name='53' x='173' y='183'/> <part name='54' x='176' y='189'/> <part name='55' x='174' y='193'/> <part name='56' x='170' y='197'/> <part name='57' x='165' y='199'/> <part name='58' x='160' y='199'/> <part name='59' x='152' y='198'/> <part name='60' x='143' y='194'/> <part name='61' x='159' y='186'/> <part name='62' x='163' y='187'/> <part name='63' x='168' y='186'/> <part name='64' x='174' y='189'/> <part name='65' x='168' y='191'/> <part name='66' x='164' y='192'/> <part name='67' x='160' y='192'/> </box> </image> ... All training data in the iBUG-300W dataset is represented by a structured XML file. Each image has an image tag. Inside the image tag is a file attribute that points to where the example image file resides on disk. Additionally, each image has a box element associated with it. The box element represents the bounding box coordinates of the face in the image. To understand how the box element represents the bounding box of the face, consider its four attributes: top: The starting y-coordinate of the bounding box. left: The starting x-coordinate of the bounding box.
width: The width of the bounding box. height: The height of the bounding box. Inside the box element we have a total of 68 part elements — these part elements represent the individual (x, y)-coordinates of the facial landmarks in the iBUG-300W dataset. Notice that each part element has three attributes: name: The index/name of the specific facial landmark. x: The x-coordinate of the landmark. y: The y-coordinate of the landmark. So, how do these landmarks map to specific facial structures? The answer lies in the following figure: Figure 5: Visualizing the 68 facial landmark coordinates from the iBUG 300-W dataset. The coordinates in Figure 5 are 1-indexed, so to map a coordinate name to our XML file, simply subtract 1 from the value (since our XML file is 0-indexed). Based on the visualization, we can then derive which name coordinate maps to which facial structure: The mouth can be accessed through points [48, 68].
The right eyebrow through points [17, 22]. The left eyebrow through points [22, 27]. The right eye using [36, 42]. The left eye with [42, 48]. The nose using [27, 35]. And the jaw via [0, 17]. Since we’re only interested in the eyes, we therefore need to parse out points [36, 48), again keeping in mind that: Our coordinates are zero-indexed in the XML file And the closing parenthesis “)” in [36, 48) is mathematical notation implying “non-inclusive”. Now that we understand the structure of the iBUG-300W training file, we can move on to parsing out only the eye coordinates. Building an “eyes only” shape predictor dataset Let’s create a Python script to parse the iBUG-300W XML files and extract only the eye coordinates (which we’ll then train a custom dlib shape predictor on in the following section). Open up the parse_xml.py file and we’ll get started: # import the necessary packages import argparse import re # construct the argument parser and parse the arguments ap = argparse.
ArgumentParser() ap.add_argument("-i", "--input", required=True, help="path to iBug 300-W data split XML file") ap.add_argument("-t", "--output", required=True, help="path output data split XML file") args = vars(ap.parse_args()) Lines 2 and 3 import necessary packages. We’ll use two of Python’s built-in modules: (1) argparse for parsing command line arguments, and (2) re for regular expression matching. If you ever need help developing regular expressions, regex101.com is a great tool and supports languages other than Python as well. Our script requires two command line arguments: --input : The path to our input data split XML file (i.e. from the iBug 300-W dataset). --output : The path to our output eyes-only XML file. Let’s go ahead and define the indices of our eye coordinates: # in the iBUG 300-W dataset, each (x, y)-coordinate maps to a specific # facial feature (i.e., eye, mouth, nose, etc.) -- in order to train a # dlib shape predictor on *just* the eyes, we must first define the # integer indexes that belong to the eyes LANDMARKS = set(list(range(36, 48))) Our eye landmarks are specified on Line 17. Refer to Figure 5, keeping in mind that the figure is 1-indexed while Python is 0-indexed. We’ll be training our custom shape predictor on eye locations; however, you could just as easily train an eyebrow, nose, mouth, or jawline predictor, including any combination or subset of these structures, by modifying the LANDMARKS list and including the 0-indexed names of the landmarks you want to detect. Now let’s define our regular expression and load the original input XML file: # to easily parse out the eye locations from the XML file we can # utilize regular expressions to determine if there is a 'part' # element on any given line PART = re.compile("part name='[0-9]+'") # load the contents of the original XML file and open the output file # for writing print("[INFO] parsing data split XML file...") rows = open(args["input"]).read().strip().split("\n") output = open(args["output"], "w") Our regular expression on Line 22 will soon enable extracting part elements along with their names/indexes.
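As a quick sanity check, here is how that regular expression behaves on two sample lines pulled from the XML snippet above (a small illustrative example, not part of the project's scripts):

import re

PART = re.compile("part name='[0-9]+'")

print(re.findall(PART, "<part name='36' x='111' y='134'/>"))
# => ["part name='36'"]
print(re.findall(PART, "<box top='78' left='74' width='138' height='140'>"))
# => []

Lines containing a part element produce a match (from which we can pull the landmark index), while every other line produces no match and will be passed through unchanged.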
Line 27 loads the contents of input XML file. Line 28 opens our output XML file for writing. Now we’re ready to loop over the input XML file to find and extract the eye landmarks: # loop over the rows of the data split file for row in rows: # check to see if the current line has the (x, y)-coordinates for # the facial landmarks we are interested in parts = re.findall(PART, row) # if there is no information related to the (x, y)-coordinates of # the facial landmarks, we can write the current line out to disk # with no further modifications if len(parts) == 0: output.write("{}\n".format(row)) # otherwise, there is annotation information that we must process else: # parse out the name of the attribute from the row attr = "name='" i = row.find(attr) j = row.find("'", i + len(attr) + 1) name = int(row[i + len(attr):j]) # if the facial landmark name exists within the range of our # indexes, write it to our output file if name in LANDMARKS: output.write("{}\n".format(row)) # close the output file output.close() Line 31 begins a loop over the rows of the input XML file. Inside the loop, we perform the following tasks: Determine if the current row contains a part element via regular expression matching (Line 34). If it does not contain a part element, write the row back out to file (Lines 39 and 40). If it does contain a part element, we need to parse it further (Lines 43-53). Here we extract name attribute from the part. And then check to see if the name exists in the LANDMARKS we want to train a shape predictor to localize. If so, we write the row back out to disk (otherwise we ignore the particular name as it’s not a landmark we want to localize). Wrap up the script by closing our output XML file (Line 56).
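If you wanted to target a different facial structure, the only change needed is the LANDMARKS set. Using the 0-indexed ranges listed earlier, a few example alternatives (sketches consistent with the mapping above) would be:

# both eyes (what this tutorial uses)
LANDMARKS = set(list(range(36, 48)))

# mouth
# LANDMARKS = set(list(range(48, 68)))

# both eyebrows
# LANDMARKS = set(list(range(17, 27)))

# eyes + eyebrows combined
# LANDMARKS = set(list(range(36, 48)) + list(range(17, 27)))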
Note: Most of our parse_xml.py script was inspired by Luca Anzalone’s slice_xml function from their GitHub repo. A big thank you to Luca for putting together such a simple, concise script that is highly effective! Creating our training and testing splits Figure 6: Creating our “eye only” face landmark training/testing XML files for training a dlib custom shape predictor with Python. At this point in the tutorial I assume you have both: Downloaded the iBUG-300W dataset from the “Downloading the iBUG 300-W dataset” section above Used the “Downloads” section of this tutorial to download the source code. You can use the following command to generate our new training file by parsing only the eye landmark coordinates from the original training file: $ python parse_xml.py \ --input ibug_300W_large_face_landmark_dataset/labels_ibug_300W_train.xml \ --output ibug_300W_large_face_landmark_dataset/labels_ibug_300W_train_eyes.xml [INFO] parsing data split XML file... Similarly, you can do the same to create our new testing file: $ python parse_xml.py \ --input ibug_300W_large_face_landmark_dataset/labels_ibug_300W_test.xml \ --output ibug_300W_large_face_landmark_dataset/labels_ibug_300W_test_eyes.xml [INFO] parsing data split XML file... To verify that our new training/testing files have been created, check your iBUG-300W root dataset directory for the labels_ibug_300W_train_eyes.xml and labels_ibug_300W_test_eyes.xml files: $ cd ibug_300W_large_face_landmark_dataset $ ls -lh *.xml -rw-r--r--@ 1 adrian staff 21M Aug 16 2014 labels_ibug_300W.xml -rw-r--r--@ 1 adrian staff 2.8M Aug 16 2014 labels_ibug_300W_test.xml -rw-r--r-- 1 adrian staff 602K Dec 12 12:54 labels_ibug_300W_test_eyes.xml -rw-r--r--@ 1 adrian staff 18M Aug 16 2014 labels_ibug_300W_train.xml -rw-r--r-- 1 adrian staff 3.9M Dec 12 12:54 labels_ibug_300W_train_eyes.xml $ cd .. Notice that our *_eyes.xml files are highlighted. Both of these files are significantly smaller in filesize than their original, non-parsed counterparts. Implementing our custom dlib shape predictor training script Our dlib shape predictor training script is loosely based on (1) dlib’s official example and (2) Luca Anzalone’s excellent 2018 article. My primary contributions here are to: Supply a complete end-to-end example of creating a custom dlib shape predictor, including: Training the shape predictor on a training set Evaluating the shape predictor on a testing set Use the shape predictor to make predictions on custom images/video streams. Provide additional commentary on the hyperparameters you should be tuning. Demonstrate how to systematically tune your shape predictor hyperparameters to balance speed, model size, and accuracy (next week’s tutorial).
To learn how to train your own dlib shape predictor, open up the train_shape_predictor.py file in your project structure and insert the following code: # import the necessary packages import multiprocessing import argparse import dlib # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-t", "--training", required=True, help="path to input training XML file") ap.add_argument("-m", "--model", required=True, help="path serialized dlib shape predictor model") args = vars(ap.parse_args()) Lines 2-4 import our packages, namely dlib. The dlib toolkit is a package developed by PyImageConf 2018 speaker, Davis King. We will use dlib to train our shape predictor. The multiprocessing library will be used to grab and set the number of threads/processes we will use for training our shape predictor. Our script requires two command line arguments (Lines 7-12): --training : The path to our input training XML file. We will use the eyes-only XML file generated by the previous two sections. --model : The path to the serialized dlib shape predictor output file. From here we need to set options (i.e., hyperparameters) prior to training the shape predictor. While the following code blocks could be condensed into just 11 lines of code, the comments in both the code and in this tutorial provide additional information to help you both (1) understand the key options, and (2) configure and tune the options/hyperparameters for optimal performance.
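For reference, here is roughly what that condensed version looks like, using the same option values this tutorial settles on below (a sketch meant only to show how little code is strictly required, not a substitute for understanding each option):

import multiprocessing
import dlib

options = dlib.shape_predictor_training_options()
options.tree_depth = 4
options.nu = 0.1
options.cascade_depth = 15
options.feature_pool_size = 400
options.num_test_splits = 50
options.oversampling_amount = 5
options.oversampling_translation_jitter = 0.1
options.be_verbose = True
options.num_threads = multiprocessing.cpu_count()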
In the remaining code blocks in this section I’ll be discussing the 7 most important hyperparameters you can tune/set when training your own custom dlib shape predictor. These values are: tree_depth nu cascade_depth feature_pool_size num_test_splits oversampling_amount oversampling_translation_jitter We’ll begin with grabbing the default dlib shape predictor options: # grab the default options for dlib's shape predictor print("[INFO] setting shape predictor options...") options = dlib.shape_predictor_training_options() From there, we’ll configure the tree_depth option: # define the depth of each regression tree -- there will be a total # of 2^tree_depth leaves in each tree; small values of tree_depth # will be *faster* but *less accurate* while larger values will # generate trees that are *deeper*, *more accurate*, but will run # *far slower* when making predictions options.tree_depth = 4 Here we define the tree_depth, which, as the name suggests, controls the depth of each regression tree in the Ensemble of Regression Trees (ERTs). There will be 2^tree_depth leaves in each tree — you must be careful to balance depth with speed. Smaller values of tree_depth will lead to more shallow trees that are faster, but potentially less accurate. Larger values of tree_depth will create deeper trees that are slower, but potentially more accurate. Typical values for tree_depth are in the range [2, 8]. The next parameter we’re going to explore is nu, a regularization parameter: # regularization parameter in the range [0, 1] that is used to help # our model generalize -- values closer to 1 will make our model fit # the training data better, but could cause overfitting; values closer # to 0 will help our model generalize but will require us to have # training data in the order of 1000s of data points options.nu = 0.1 The nu option is a floating-point value (in the range [0, 1]) used as a regularization parameter to help our model generalize. Values closer to 1 will make our model fit the training data closer, but could potentially lead to overfitting. Values closer to 0 will help our model generalize; however, there is a caveat to the generalization power — the closer nu is to 0, the more training data you’ll need. Typically, for small values of nu you’ll need 1000s of training examples.
Our next parameter is the cascade_depth: # the number of cascades used to train the shape predictor -- this # parameter has a *dramatic* impact on both the *accuracy* and *output # size* of your model; the more cascades you have, the more accurate # your model can potentially be, but also the *larger* the output size options.cascade_depth = 15 A series of cascades is used to refine and tune the initial predictions from the ERTs — the cascade_depth will have a dramatic impact on both the accuracy and the output file size of your model. The more cascades you allow for, the larger your model will become (but potentially more accurate). The fewer cascades you allow, the smaller your model will be (but could be less accurate). The following figure from Kazemi and Sullivan’s paper demonstrates the impact that the cascade_depth has on facial landmark alignment: Figure 7: The cascade_depth parameter has a significant impact on the accuracy of your custom dlib shape/landmark predictor model. Clearly you can see that the deeper the cascade, the better the facial landmark alignment. Typically you’ll want to explore cascade_depth values in the range [6, 18], depending on your required target model size and accuracy. Let’s now move on to the feature_pool_size: # number of pixels used to generate features for the random trees at # each cascade -- larger pixel values will make your shape predictor # more accurate, but slower; use large values if speed is not a # problem, otherwise smaller values for resource constrained/embedded # devices options.feature_pool_size = 400 The feature_pool_size controls the number of pixels used to generate features for the random trees in each cascade. The more pixels you include, the slower your model will run (but could potentially be more accurate). The fewer pixels you take into account, the faster your model will run (but could also be less accurate). My recommendation here is that you should use large values for feature_pool_size if inference speed is not a concern.
Otherwise, you should use smaller values for faster prediction speed (typically for embedded/resource-constrained devices). The next parameter we’re going to set is the num_test_splits: # selects best features at each cascade when training -- the larger # this value is, the *longer* it will take to train but (potentially) # the more *accurate* your model will be options.num_test_splits = 50 The num_test_splits parameter has a dramatic impact on how long it takes your model to train (i.e., training/wall clock time, not inference speed). The more num_test_splits you consider, the more likely you’ll have an accurate shape predictor — but again, be cautious with this parameter as it can cause training time to explode. Let’s check out the oversampling_amount next: # controls amount of "jitter" (i.e., data augmentation) when training # the shape predictor -- applies the supplied number of random # deformations, thereby performing regularization and increasing the # ability of our model to generalize options.oversampling_amount = 5 The oversampling_amount controls the amount of data augmentation applied to our training data. The dlib library refers to this augmentation as “jitter,” but it is essentially the same idea as standard data augmentation. Here we are telling dlib to apply a total of 5 random deformations to each input image. You can think of the oversampling_amount as a regularization parameter as it may lower training accuracy but increase testing accuracy, thereby allowing our model to generalize better. Typical oversampling_amount values lie in the range [0, 50] where 0 means no augmentation and 50 is a 50x increase in your training dataset. Be careful with this parameter! Larger oversampling_amount values may seem like a good idea but they can dramatically increase your training time.
Next comes the oversampling_translation_jitter option: # amount of translation jitter to apply -- the dlib docs recommend # values in the range [0, 0.5] options.oversampling_translation_jitter = 0.1 The oversampling_translation_jitter controls the amount of translation augmentation applied to our training dataset. Typical values for translation jitter lie in the range [0, 0.5]. The be_verbose option simply instructs dlib to print out status messages as our shape predictor is training: # tell the dlib shape predictor to be verbose and print out status # messages our model trains options.be_verbose = True Finally, we have the num_threads parameter: # number of threads/CPU cores to be used when training -- we default # this value to the number of available cores on the system, but you # can supply an integer value here if you would like options.num_threads = multiprocessing.cpu_count() This parameter is extremely important as it can dramatically speed up the time it takes to train your model! The more CPU threads/cores you can supply to dlib, the faster your model will train. We’ll default this value to the total number of CPUs on our system; however, you can set this value as any integer (provided it’s less-than-or-equal-to the number of CPUs on your system). Now that our options are set, the final step is to simply call train_shape_predictor: # log our training options to the terminal print("[INFO] shape predictor options:") print(options) # train the shape predictor print("[INFO] training shape predictor...") dlib.train_shape_predictor(args["training"], args["model"], options) The dlib library accepts (1) the path to our training XML file, (2) the path to our output shape predictor model, and (3) our set of options. Once trained the shape predictor will be serialized to disk so we can later use it. While this script may have appeared especially easy, be sure to spend time configuring your options/hyperparameters for optimal performance. Training the custom dlib shape predictor We are now ready to train our custom dlib shape predictor! Make sure you have (1) downloaded the iBUG-300W dataset and (2) used the “Downloads” section of this tutorial to download the source code to this post.
Once you have done so, you are ready to train the shape predictor: $ python train_shape_predictor.py \ --training ibug_300W_large_face_landmark_dataset/labels_ibug_300W_train_eyes.xml \ --model eye_predictor.dat [INFO] setting shape predictor options... [INFO] shape predictor options: shape_predictor_training_options(be_verbose=1, cascade_depth=15, tree_depth=4, num_trees_per_cascade_level=500, nu=0.1, oversampling_amount=5, oversampling_translation_jitter=0.1, feature_pool_size=400, lambda_param=0.1, num_test_splits=50, feature_pool_region_padding=0, random_seed=, num_threads=20, landmark_relative_padding_mode=1) [INFO] training shape predictor... Training with cascade depth: 15 Training with tree depth: 4 Training with 500 trees per cascade level. Training with nu: 0.1 Training with random seed: Training with oversampling amount: 5 Training with oversampling translation jitter: 0.1 Training with landmark_relative_padding_mode: 1 Training with feature pool size: 400 Training with feature pool region padding: 0 Training with 20 threads. Training with lambda_param: 0.1 Training with 50 split tests. Fitting trees... Training complete Training complete, saved predictor to file eye_predictor.dat The entire training process took 9m11s on my 3 GHz Intel Xeon W processor. To verify that your shape predictor has been serialized to disk, ensure that eye_predictor.dat has been created in your directory structure: $ ls -lh *.dat -rw-r--r--@ 1 adrian staff 18M Dec 4 17:15 eye_predictor.dat As you can see, the output model is only 18MB — that’s quite the reduction in file size compared to dlib’s standard/default facial landmark predictor which is 99.7MB! Implementing our shape predictor evaluation script Now that we’ve trained our dlib shape predictor, we need to evaluate its performance on both our training and testing sets to verify that it’s not overfitting and that our results will (ideally) generalize to our own images outside the training set. Open up the evaluate_shape_predictor.py file and insert the following code: # import the necessary packages import argparse import dlib # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-p", "--predictor", required=True, help="path to trained dlib shape predictor model") ap.add_argument("-x", "--xml", required=True, help="path to input training/testing XML file") args = vars(ap.parse_args()) # compute the error over the supplied data split and display it to # our screen print("[INFO] evaluating shape predictor...") error = dlib.test_shape_predictor(args["xml"], args["predictor"]) print("[INFO] error: {}".format(error)) Lines 2 and 3 indicate that we need both argparse and dlib to evaluate our shape predictor. Our command line arguments include: --predictor : The path to our serialized shape predictor model that we generated via the previous two “Training” sections. --xml : The path to the input training/testing XML file (i.e. our eyes-only parsed XML files).
When both of these arguments are provided via the command line, dlib will handle evaluation (Line 16). Dlib handles computing the mean average error (MAE) between the predicted landmark coordinates and the ground-truth landmark coordinates. The smaller the MAE, the better the predictions. Shape prediction accuracy results If you haven’t yet, use the “Downloads” section of this tutorial to download the source code and pre-trained shape predictor. From there, execute the following command to evaluate our eye landmark predictor on the training set: $ python evaluate_shape_predictor.py --predictor eye_predictor.dat \ --xml ibug_300W_large_face_landmark_dataset/labels_ibug_300W_train_eyes.xml [INFO] evaluating shape predictor... [INFO] error: 3.631152776257545 Here we are obtaining an MAE of ~3.63. Let’s now run the same command on our testing set: $ python evaluate_shape_predictor.py --predictor eye_predictor.dat \ --xml ibug_300W_large_face_landmark_dataset/labels_ibug_300W_test_eyes.xml [INFO] evaluating shape predictor... [INFO] error: 7.568211111799696 As you can see the MAE is twice as large on our testing set versus our training set. If you have any prior experience working with machine learning or deep learning algorithms you know that in most situations, your training loss will be lower than your testing loss. That doesn’t mean that your model is performing badly — instead, it simply means that your model is doing a better job modeling the training data versus the testing data. Shape predictors are especially interesting to evaluate as it’s not just the MAE that needs to be examined! You also need to visually validate the results and verify the shape predictor is working as expected — we’ll cover that topic in the next section.
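Before moving on, if you find yourself evaluating both splits frequently, a small sketch like the following computes both errors in a single run (the paths assume the project structure shown earlier):

import dlib

PREDICTOR = "eye_predictor.dat"
SPLITS = {
    "training": "ibug_300W_large_face_landmark_dataset/labels_ibug_300W_train_eyes.xml",
    "testing": "ibug_300W_large_face_landmark_dataset/labels_ibug_300W_test_eyes.xml",
}

# dlib computes the mean error between predicted and ground-truth landmarks
for (split, xmlPath) in SPLITS.items():
    error = dlib.test_shape_predictor(xmlPath, PREDICTOR)
    print("[INFO] {} error: {}".format(split, error))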
Implementing the shape predictor inference script Now that we have our shape predictor trained, we need to visually validate that the results look good by applying it to our own example images/video. In this section we will: Load our trained dlib shape predictor from disk. Access our video stream. Apply the shape predictor to each individual frame. Verify that the results look good. Let’s get started. Open up predict_eyes.py and insert the following code: # import the necessary packages from imutils.video import VideoStream from imutils import face_utils import argparse import imutils import time import dlib import cv2 # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-p", "--shape-predictor", required=True, help="path to facial landmark predictor") args = vars(ap.parse_args()) Lines 2-8 import necessary packages. In particular we will use imutils and OpenCV (cv2) in this script. Our VideoStream class will allow us to access our webcam.
The face_utils module contains a helper function used to convert dlib’s landmark predictions to a NumPy array. The only command line argument required for this script is the path to our trained facial landmark predictor, --shape-predictor . Let’s perform three initializations: # initialize dlib's face detector (HOG-based) and then load our # trained shape predictor print("[INFO] loading facial landmark predictor...") detector = dlib.get_frontal_face_detector() predictor = dlib.shape_predictor(args["shape_predictor"]) # initialize the video stream and allow the cammera sensor to warmup print("[INFO] camera sensor warming up...") vs = VideoStream(src=0).start() time.sleep(2.0) Our initializations include: Loading the face detector (Line 19). The detector allows us to find a face in an image/video prior to localizing landmarks on the face. We’ll be using dlib’s HOG + Linear SVM face detector. Alternatively, you could use Haar cascades (great for resource-constrained, embedded devices) or a more accurate deep learning face detector. Loading the facial landmark predictor (Line 20). Initializing our webcam stream (Line 24). Now we’re ready to loop over frames from our camera: # loop over the frames from the video stream while True: # grab the frame from the video stream, resize it to have a # maximum width of 400 pixels, and convert it to grayscale frame = vs.read() frame = imutils.resize(frame, width=400) gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # detect faces in the grayscale frame rects = detector(gray, 0) Lines 31-33 grab a frame, resize it, and convert to grayscale. Line 36 applies face detection using dlib’s HOG + Linear SVM algorithm.
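As a side note, if you wanted to experiment with the Haar cascade option mentioned above (for resource-constrained devices), one possible way to swap in OpenCV's detector is sketched below; this assumes your OpenCV install bundles the frontal face cascade file, and it converts each detection back into a dlib rectangle so the rest of the loop can stay the same:

import cv2
import dlib

# assumption: cv2.data.haarcascades points at OpenCV's bundled cascade files
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces_haar(gray):
    # detect faces, then convert each (x, y, w, h) box into a dlib rectangle
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
        minSize=(30, 30))
    return [dlib.rectangle(int(x), int(y), int(x + w), int(y + h))
        for (x, y, w, h) in boxes]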
Let’s process the faces detected in the frame by predicting and drawing facial landmarks: # loop over the face detections for rect in rects: # convert the dlib rectangle into an OpenCV bounding box and # draw a bounding box surrounding the face (x, y, w, h) = face_utils.rect_to_bb(rect) cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2) # use our custom dlib shape predictor to predict the location # of our landmark coordinates, then convert the prediction to # an easily parsable NumPy array shape = predictor(gray, rect) shape = face_utils.shape_to_np(shape) # loop over the (x, y)-coordinates from our dlib shape # predictor model draw them on the image for (sX, sY) in shape: cv2.circle(frame, (sX, sY), 1, (0, 0, 255), -1) Line 39 begins a loop over the detected faces. Inside the loop, we: Take dlib’s rectangle object and convert it to OpenCV’s standard (x, y, w, h) bounding box ordering (Line 42). Draw the bounding box surrounding the face (Line 43). Use our custom dlib shape predictor to predict the location of our landmarks (i.e., eyes) via Line 48. Convert the returned coordinates to a NumPy array (Line 49). Loop over the predicted landmark coordinates and draw them individually as small dots on the output frame (Line 53 and 54). If you need a refresher on drawing rectangles and solid circles, refer to my OpenCV Tutorial. To wrap up we’ll display the result! # show the frame cv2.imshow("Frame", frame) key = cv2.waitKey(1) & 0xFF # if the `q` key was pressed, break from the loop if key == ord("q"): break # do a bit of cleanup cv2.destroyAllWindows() vs.stop() Lines 57 displays the frame to the screen. If the q key is pressed at any point while we’re processing frames from our video stream, we’ll break and perform cleanup.
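If you would rather test the predictor on a single image instead of a video stream, a minimal variation is sketched below (the image filename is a placeholder; eye_predictor.dat is the model we trained above):

from imutils import face_utils
import imutils
import dlib
import cv2

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("eye_predictor.dat")

image = cv2.imread("my_face_image.jpg")
image = imutils.resize(image, width=400)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detect faces, predict the eye landmarks, and draw them on the image
for rect in detector(gray, 1):
    shape = face_utils.shape_to_np(predictor(gray, rect))
    for (sX, sY) in shape:
        cv2.circle(image, (sX, sY), 1, (0, 0, 255), -1)

cv2.imshow("Image", image)
cv2.waitKey(0)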
Making predictions with our dlib shape predictor Are you ready to see our custom shape predictor in action? If so, make sure you use the “Downloads” section of this tutorial to download the source code and pre-trained dlib shape predictor. From there you can execute the following command: $ python predict_eyes.py --shape-predictor eye_predictor.dat [INFO] loading facial landmark predictor... [INFO] camera sensor warming up... As you can see, our shape predictor is both: Correctly localizing my eyes in the input video stream Running in real-time Again, I’d like to call your attention back to the “Balancing shape predictor model speed and accuracy” section of this tutorial — our model is not predicting all of the possible 68 landmark locations on the face! Instead, we have trained a custom dlib shape predictor that only localizes the eye regions (i.e., our model is not trained on the other facial structures in the iBUG-300W dataset, including the eyebrows, nose, mouth, and jawline). Our custom eye predictor can be used in situations where we don’t need the additional facial structures and only require the eyes, such as building a drowsiness detector, building a virtual makeover application for eyeliner/mascara, or creating computer-assisted software to help disabled users utilize their computers. In next week’s tutorial, I’ll show you how to tune the hyperparameters to dlib’s shape predictor to obtain optimal performance. How do I create my own dataset for shape predictor training? To create your own shape predictor dataset you’ll need to use dlib’s imglab tool. Covering how to create and annotate your own dataset for shape predictor training is outside the scope of this blog post.
I’ll be covering it in a future tutorial here on PyImageSearch.
Summary In this tutorial, you learned how to train your own custom dlib shape/landmark predictor. To train our shape predictor we utilized the iBUG-300W dataset, only instead of training our model to recognize all facial structures (i.e., eyes, eyebrows, nose, mouth, and jawline), we instead trained the model to localize just the eyes. The end result is a model that is: Accurate: Our shape predictor can accurately predict/localize the location of the eyes on a face.
Small: Our eye landmark predictor is smaller than the pre-trained dlib face landmark predictor (18MB vs. 99.7MB, respectively). Fast: Our model is faster than dlib’s pre-trained facial landmark predictor as it predicts fewer locations (the hyperparameters to the model were also chosen to improve speed). In next week’s tutorial, I’ll teach you how to systematically tune the hyperparameters to dlib’s shape predictor training procedure to balance prediction speed, model size, and localization accuracy. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), just enter your email address in the form below!
https://pyimagesearch.com/2019/12/23/tuning-dlib-shape-predictor-hyperparameters-to-balance-speed-accuracy-and-model-size/
In this tutorial, you will learn how to optimally tune dlib’s shape predictor hyperparameters and options to obtain a shape predictor that balances speed, accuracy, and model size. Today is part two in our two-part series on training custom shape predictors with dlib: Part #1: Training custom dlib shape predictors (last week’s tutorial) Part #2: Tuning dlib shape predictor hyperparameters to balance speed, accuracy, and model size (today’s tutorial) Many software developers and project managers are familiar with the concept of the “Iron Triangle”. When building software we need to balance: Good, high-quality software Software that can be delivered to the customer fast How expensive the software is (i.e., whether or not it’s cheap) The caveat is that we can only pick two of the above. Good, high-quality software that is delivered to the customer quickly is certainly not cheap. Similarly, software that was developed cheaply and delivered fast is likely not good. When training our own custom dlib shape predictors we have a similar problem — we need to balance: Speed: How fast the model can make predictions (i.e., inference speed). Accuracy: How precise and accurate our model is in its predictions. Model size: The larger the model is, the more space it takes up, and the more computational resources it requires. Smaller models are therefore preferred. But unlike the Iron Triangle of software development which only has three vertices, dlib’s shape predictor includes 7-10 options that you’ll typically want to tune.
So, how do we go about tuning these shape predictor options and hyperparameters? I’ll be discussing that topic in the remainder of the post. Note: If you haven’t read last week’s post on training a dlib shape predictor, make sure you do so now, as the rest of this tutorial builds on it. To learn how to tune dlib’s shape predictor options to optimally balance speed, accuracy, and model size, just keep reading! Tuning dlib shape predictor hyperparameters to balance speed, accuracy, and model size In the first part of this tutorial, we’ll discuss why we need to tune the options to dlib’s shape predictor to obtain an optimal model for our particular project requirements and application. From there we’ll discuss the dataset we’ll be using today to train our dlib shape predictor on. I’ll then show you how you can implement a Python script to automatically explore dlib’s shape predictor options. We’ll wrap up the tutorial by discussing how we can use the results of this script to set the options to dlib’s shape predictor, train it, and obtain an optimal model. Let’s get started!
Why do we need to tune our shape predictor hyperparameters? Figure 1: In this tutorial, we will learn how to tune custom dlib shape predictor hyperparameters to balance speed, accuracy, and model size. When training our own custom dlib shape predictors we need to balance: Model speed Model accuracy Model size Typically we can only have 1-2 of these choices. Before you even open up your code editor or command line, first consider the goal of the project and where your shape predictor will be deployed: Will the shape predictor be used on an embedded device? If so, compromise on accuracy a bit and seek a model that is fast and small. Are you deploying the model to modern laptops/desktops? You may be able to get away with larger models that are more computationally expensive so don’t worry as much about model size and focus on maximizing accuracy. Is the output size of the model a concern? If your model needs to be deployed/updated over a network connection then you should seek a model that is as small as possible but still achieves reasonable accuracy. Is the amount of time it takes to train the model a concern?
If so, pay attention to any jitter/data augmentation applied during the training process. Considering these options ahead of time will make it far easier for you to tune the options to dlib’s shape predictor — I’ll also show you my own tuning script that I use to help narrow in on shape predictor options that will work well for my respective use cases. The iBUG-300W dataset Figure 2: The iBug 300-W face landmark dataset is used to train a custom dlib shape predictor. We will tune custom dlib shape predictor hyperparameters in an effort to balance speed, accuracy, and model size. To train and tune our own custom dlib shape predictors, we’ll be using the iBUG 300-W dataset, the same dataset we used in last week’s tutorial. The iBUG 300-W dataset is used to train facial landmark predictors and localize the individual structures of the face, including: Eyebrows Eyes Nose Mouth Jawline However, we’ll be training our shape predictor to localize only the eyes — our model will not be trained on the other facial structures. For more details on the iBUG 300-W dataset, refer to last week’s blog post. Configuring your dlib development environment To follow along with today’s tutorial, you will need a virtual environment with the following packages installed: dlib OpenCV imutils scikit-learn Luckily, each of these packages is pip-installable, but there are a handful of pre-requisites (including Python virtual environments). Be sure to follow these two guides for additional information in configuring your development environment: Install dlib (the easy, complete guide) pip install opencv The pip install commands include: $ workon <env-name> $ pip install dlib $ pip install opencv-contrib-python $ pip install imutils $ pip install scikit-learn The workon command becomes available once you install virtualenv and virtualenvwrapper per either my dlib or OpenCV installation guides. Downloading the iBUG 300-W dataset Before we get too far into this tutorial, take a second now to download the iBUG 300-W dataset (~1.7GB): http://dlib.net/files/data/ibug_300W_large_face_landmark_dataset.tar.gz You’ll also want to use the “Downloads” section of this blog post to download the source code.
I recommend placing the iBug 300-W dataset into the zip associated with the download of this tutorial like this: $ unzip tune-dlib-shape-predictor.zip ... $ cd tune-dlib-shape-predictor $ mv ~/Downloads/ibug_300W_large_face_landmark_dataset.tar.gz . $ tar -xvf ibug_300W_large_face_landmark_dataset.tar.gz ... Alternatively (i.e. rather than clicking the hyperlink above), use wget in your terminal to download the dataset directly: $ unzip tune-dlib-shape-predictor.zip ... $ cd tune-dlib-shape-predictor $ wget http://dlib.net/files/data/ibug_300W_large_face_landmark_dataset.tar.gz $ tar -xvf ibug_300W_large_face_landmark_dataset.tar.gz ... From there you can follow along with the rest of the tutorial. Project structure Assuming you have followed the instructions in the previous section, your project directory is now organized as follows: $ tree --dirsfirst --filelimit 15 . ├── ibug_300W_large_face_landmark_dataset │   ├── afw [1011 entries] │   ├── helen │   │   ├── testset [990 entries] │   │   └── trainset [6000 entries] │   ├── ibug [405 entries] │   ├── image_metadata_stylesheet.xsl │   ├── labels_ibug_300W.xml │   ├── labels_ibug_300W_test.xml │   ├── labels_ibug_300W_train.xml │   └── lfpw │   ├── testset [672 entries] │   └── trainset [2433 entries] ├── ibug_300W_large_face_landmark_dataset.tar.gz ├── pyimagesearch │   ├── __init__.py │   └── config.py ├── example.jpg ├── ibug_300W_large_face_landmark_dataset.tar.gz ├── optimal_eye_predictor.dat ├── parse_xml.py ├── predict_eyes.py ├── train_shape_predictor.py ├── trials.csv └── tune_predictor_hyperparams.py 2 directories, 15 files Last week, we reviewed the following Python scripts: parse_xml.py : Parses the train/test XML dataset files for eyes-only landmark coordinates. train_shape_predictor.py : Accepts the parsed XML files to train our shape predictor with dlib. evaluate_shape_predictor.py : Calculates the Mean Average Error (MAE) of our custom shape predictor. Not included in today’s download — similar/additional functionality is provided in today’s tuning script. predict_eyes.py : Performs shape prediction using our custom dlib shape predictor, trained to only recognize eye landmarks. Today we will review the following Python files: config.py : Our configuration paths, constants, and variables are all in one convenient location. tune_predictor_hyperparams.py : The heart of today’s tutorial lays here.
This script determines all 6,075 combinations of dlib shape predictor hyperparameters. From there, we’ll randomly sample 100 combinations and proceed to train and evaluate those 100 models. The hyperparameters and evaluation criteria are output to a CSV file for inspection in a spreadsheet application of your choice. Preparing the iBUG-300W dataset for training Figure 3: Our custom dlib shape/landmark predictor recognizes just eyes. As mentioned in the “The iBUG-300W dataset” section above, we’ll be training our dlib shape predictor on just the eyes (i.e., not the eyebrows, nose, mouth or jawline). To accomplish that task, we first need to parse out any facial structures we are not interested in from the iBUG 300-W training/testing XML files. To get started, make sure you’ve: Used the “Downloads” section of this tutorial to download the source code. Used the “Downloading the iBUG-300W dataset” section above to download the iBUG-300W dataset. Reviewed the “Project structure” section. You’ll notice inside your directory structure for the project that there is a script named parse_xml.py — this script is used to parse out just the eye locations from the XML files.
We reviewed this file in detail in last week's tutorial so we're not going to review it again here today (refer to last week's post to understand how it works). Before you continue on with the rest of this tutorial you'll need to execute the following commands to prepare our "eyes only" training and testing XML files:

$ python parse_xml.py \
	--input ibug_300W_large_face_landmark_dataset/labels_ibug_300W_train.xml \
	--output ibug_300W_large_face_landmark_dataset/labels_ibug_300W_train_eyes.xml
[INFO] parsing data split XML file...
$ python parse_xml.py \
	--input ibug_300W_large_face_landmark_dataset/labels_ibug_300W_test.xml \
	--output ibug_300W_large_face_landmark_dataset/labels_ibug_300W_test_eyes.xml
[INFO] parsing data split XML file...

To verify that our new training/testing files have been created, check your iBUG-300W root dataset directory for the labels_ibug_300W_train_eyes.xml and labels_ibug_300W_test_eyes.xml files:

$ cd ibug_300W_large_face_landmark_dataset
$ ls -lh *.xml
-rw-r--r--@ 1 adrian staff  21M Aug 16  2014 labels_ibug_300W.xml
-rw-r--r--@ 1 adrian staff 2.8M Aug 16  2014 labels_ibug_300W_test.xml
-rw-r--r--  1 adrian staff 602K Dec 12 12:54 labels_ibug_300W_test_eyes.xml
-rw-r--r--@ 1 adrian staff  18M Aug 16  2014 labels_ibug_300W_train.xml
-rw-r--r--  1 adrian staff 3.9M Dec 12 12:54 labels_ibug_300W_train_eyes.xml
$ cd ..

Notice the two new *_eyes.xml files. Both of these files are significantly smaller in filesize than their original, non-parsed counterparts. Once you have performed these steps you can continue on with the rest of the tutorial.
Reviewing our configuration file
Before we get too far in this project, let's first review our configuration file. Open up the config.py file and insert the following code:

# import the necessary packages
import os

# define the path to the training and testing XML files
TRAIN_PATH = os.path.join("ibug_300W_large_face_landmark_dataset",
	"labels_ibug_300W_train_eyes.xml")
TEST_PATH = os.path.join("ibug_300W_large_face_landmark_dataset",
	"labels_ibug_300W_test_eyes.xml")

Here we have the paths to the training and testing XML files (i.e., the ones generated after we have parsed out the eye regions). Next, we'll define a handful of constants for tuning dlib shape predictor hyperparameters:

# define the path to the temporary model file
TEMP_MODEL_PATH = "temp.dat"

# define the path to the output CSV file containing the results of
# our experiments
CSV_PATH = "trials.csv"

# define the path to the example image we'll be using to evaluate
# inference speed using the shape predictor
IMAGE_PATH = "example.jpg"

# define the number of threads/cores we'll be using when training our
# shape predictor models
PROCS = -1

# define the maximum number of trials we'll be performing when tuning
# our shape predictor hyperparameters
MAX_TRIALS = 100

Our dlib tuning paths include: Our temporary shape predictor file used during option/hyperparameter tuning (Line 11). The CSV file used to store the results of our individual trials (Line 15). An example image we'll be using to evaluate a given model's inference speed (Line 19). Next, we'll define a multiprocessing variable — the number of parallel threads/cores we'll be using when training our shape predictor (Line 23).
A value of -1 indicates that all processor cores will be used for training. We'll be working through combinations of hyperparameters to find the best performing model. Line 27 defines the maximum number of trials we'll be performing when exploring the shape predictor hyperparameter space:
Smaller values will result in the tune_predictor_hyperparams.py script completing faster, but will also explore fewer options.
Larger values will require significantly more time for the tune_predictor_hyperparams.py script to complete and will explore more options, providing you with more results that you can then use to make better, more informed decisions on how to select your final shape predictor hyperparameters.
If we were to exhaustively search for the best model out of all 6,000+ combinations, it would take multiple weeks or months to train and evaluate the shape predictor models, even on a powerful computer; therefore, you should seek a balance with the MAX_TRIALS parameter.
Implementing our dlib shape predictor tuning script
If you followed last week's post on training a custom dlib shape predictor, you'll note that we hardcoded all of the options to our shape predictor. Hardcoding our hyperparameter values is a bit of a problem as it requires that we manually:
Step #1: Update any training options.
Step #2: Execute the script used to train the shape predictor.
Step #3: Evaluate the newly trained shape predictor on our testing set.
Step #4: Go back to Step #1 and repeat as necessary.
The problem here is that these steps are a manual process, requiring us to intervene at each and every step. Instead, it would be better if we could create a Python script that automatically handles the tuning process for us. We could define the options and corresponding values we want to explore. Our script would determine all possible combinations of these parameters. It would then train a shape predictor on these options, evaluate it, and then proceed to the next set of options. Once the script completes running we can examine the results, select the best parameters to achieve our balance of model speed, size, and accuracy, and then train the final model. To learn how we can create such a script, open up the tune_predictor_hyperparams.py file and insert the following code: # import the necessary packages from pyimagesearch import config from sklearn.model_selection import ParameterGrid import multiprocessing import numpy as np import random import time import dlib import cv2 import os Lines 2-10 import our packages including: config : Our configuration. ParameterGrid : Generates an iterable list of parameter combinations. Refer to scikit-learn’s Parameter Grid documentation. multiprocessing : Python’s built-in module for multiprocessing.
dlib : Davis King's image processing toolkit which includes a shape predictor implementation.
cv2 : OpenCV is used today for image I/O and preprocessing.
Let's now define our function to evaluate our model accuracy:

def evaluate_model_acc(xmlPath, predPath):
	# compute and return the error (lower is better) of the shape
	# predictor over our testing path
	return dlib.test_shape_predictor(xmlPath, predPath)

Lines 12-15 define a helper utility to evaluate our Mean Average Error (MAE), or more simply, the model accuracy. Just like we have a function that evaluates the accuracy of a model, we also need a method to evaluate the model inference speed:

def evaluate_model_speed(predictor, imagePath, tests=10):
	# initialize the list of timings
	timings = []

	# loop over the number of speed tests to perform
	for i in range(0, tests):
		# load the input image and convert it to grayscale
		image = cv2.imread(imagePath)
		gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

		# detect faces in the grayscale frame
		detector = dlib.get_frontal_face_detector()
		rects = detector(gray, 1)

		# ensure at least one face was detected
		if len(rects) > 0:
			# time how long it takes to perform shape prediction
			# using the current shape prediction model
			start = time.time()
			shape = predictor(gray, rects[0])
			end = time.time()

			# update our timings list
			timings.append(end - start)

	# compute and return the average over the timings
	return np.average(timings)

Our evaluate_model_speed function beginning on Line 17 accepts the following parameters:
predictor : The loaded dlib shape/landmark predictor.
imagePath : Path to an input image.
tests : The number of tests to perform and average.
Line 19 initializes a list of timings . We'll work to populate the timings in a loop beginning on Line 22. Inside the loop, we proceed to:
Load an image and convert it to grayscale (Lines 24 and 25).
Perform face detection using dlib’s HOG + Linear SVM face detector (Lines 28 and 29). Ensure at least one face was detected (Line 32). Calculate the inference time for shape/landmark prediction and add the result to timings (Lines 35-40). Finally, we return our timings average to the caller (Line 43). Let’s define a list of columns for our hyperparameter CSV file: # define the columns of our output CSV file cols = [ "tree_depth", "nu", "cascade_depth", "feature_pool_size", "num_test_splits", "oversampling_amount", "oversampling_translation_jitter", "inference_speed", "training_time", "training_error", "testing_error", "model_size" ] Remember, this CSV will hold the values of all the hyperparameters that our script tunes. Lines 46-59 define the columns of the CSV file, including the: Hyperparameter values for a given trial: tree_depth: Controls the tree depth. nu: Regularization parameter to help our model generalize. cascade_depth: Number of cascades to refine and tune the initial predictions. feature_pool_size: Controls the number of pixels used to generate features for the random trees in the cascade. num_test_splits: The number of test splits impacts training time and model accuracy.
oversampling_amount: Controls the amount of "jitter" to apply when training the shape predictor.
oversampling_translation_jitter: Controls the amount of translation "jitter"/augmentation applied to the dataset.
Evaluation criteria:
inference_speed: Inference speed of the trained shape predictor.
training_time: Amount of time it took to train the shape predictor.
training_error: Error on the training set.
testing_error: Error on the testing set.
model_size: The model filesize.
Note: Keep reading for a brief review of the hyperparameter values including guidelines on how to initialize them.
We then open our output CSV file and write the cols to disk:

# open the CSV file for writing and then write the columns as the
# header of the CSV file
csv = open(config.CSV_PATH, "w")
csv.write("{}\n".format(",".join(cols)))

# determine the number of processes/threads to use
procs = multiprocessing.cpu_count()
procs = config.PROCS if config.PROCS > 0 else procs

Lines 63 and 64 write the cols to the CSV file. Lines 67 and 68 determine the number of processes/threads to use when training. This number is based on the number of CPUs/cores your machine has. My 3GHz Intel Xeon W has 20 cores, but most laptop CPUs will have 2-8 cores. The next code block initializes the set of hyperparameters/options, as well as the corresponding values that we'll be exploring:

# initialize the list of dlib shape predictor hyperparameters that
# we'll be tuning over
hyperparams = {
	"tree_depth": list(range(2, 8, 2)),
	"nu": [0.01, 0.1, 0.25],
	"cascade_depth": list(range(6, 16, 2)),
	"feature_pool_size": [100, 250, 500, 750, 1000],
	"num_test_splits": [20, 100, 300],
	"oversampling_amount": [1, 20, 40],
	"oversampling_translation_jitter": [0.0, 0.1, 0.25]
}

As discussed in last week's post, there are 7 shape predictor options you'll want to explore. We reviewed them in detail last week, but you can find a short summary of each below:
tree_depth: There will be 2^tree_depth leaves in each tree. Smaller values of tree_depth will lead to more shallow trees that are faster, but potentially less accurate. Larger values of tree_depth will create deeper trees that are slower, but potentially more accurate.
nu: Regularization parameter used to help our model generalize.
Values closer to 1 will make our model fit the training data closer, but could potentially lead to overfitting. Values closer to 0 will help our model generalize; however, there is a caveat there — the closer nu is to 0 the more training data you will need. cascade_depth: Number of cascades used to refine and tune the initial predictions. This parameter will have a dramatic impact on both the accuracy and the output file size of the model. The more cascades you allow, the larger your model will become (and potentially more accurate). The fewer cascades you allow, the smaller your model will be (but could also result in less accuracy). feature_pool_size: Controls the number of pixels used to generate features for each of the random trees in the cascade. The more pixels you include, the slower your model will run (but could also result in a more accurate shape predictor). The fewer pixels you take into account, the faster your model will run (but could also be less accurate). num_test_splits: Impacts both training time and model accuracy.
The more num_test_splits you consider, the more likely you'll have an accurate shape predictor, but be careful! Large values will cause training time to explode and take much longer for the shape predictor training to complete.
oversampling_amount: Controls the amount of "jitter" (i.e., data augmentation) to apply when training the shape predictor. Typical values lie in the range [0, 50]. A value of 5, for instance, would result in a 5x increase in your training data. Be careful here as the larger the oversampling_amount, the longer it will take your model to train.
oversampling_translation_jitter: Controls the amount of translation jitter/augmentation applied to the dataset.
Now that we have the set of hyperparams we'll be exploring, we need to construct all possible combinations of these options — to do that, we'll be using scikit-learn's ParameterGrid class:

# construct the set of hyperparameter combinations and randomly
# sample them as trying to test *all* of them would be
# computationally prohibitive
combos = list(ParameterGrid(hyperparams))
random.shuffle(combos)
sampledCombos = combos[:config.MAX_TRIALS]
print("[INFO] sampling {} of {} possible combinations".format(
	len(sampledCombos), len(combos)))

Given our set of hyperparams on Lines 72-80 above, there will be a total of 6,075 possible combinations that we can explore. On a single machine that would take weeks to explore so we'll instead randomly sample the parameters to get a reasonable coverage of the possible values.
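As a quick sanity check on that 6,075 figure, you can multiply the number of values per hyperparameter (3 * 3 * 5 * 5 * 3 * 3 * 3 = 6,075) or let ParameterGrid do the counting for you. A standalone sketch (it simply restates the same search space as above):

from sklearn.model_selection import ParameterGrid

# same search space as in tune_predictor_hyperparams.py
hyperparams = {
	"tree_depth": list(range(2, 8, 2)),                    # 3 values
	"nu": [0.01, 0.1, 0.25],                               # 3 values
	"cascade_depth": list(range(6, 16, 2)),                # 5 values
	"feature_pool_size": [100, 250, 500, 750, 1000],       # 5 values
	"num_test_splits": [20, 100, 300],                     # 3 values
	"oversampling_amount": [1, 20, 40],                    # 3 values
	"oversampling_translation_jitter": [0.0, 0.1, 0.25],   # 3 values
}

# 3 * 3 * 5 * 5 * 3 * 3 * 3 = 6,075 combinations
print(len(ParameterGrid(hyperparams)))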
Lines 85 and 86 construct the set of all possible option/value combinations and randomly shuffle them. We then sample MAX_TRIALS combinations (Line 87). Let's go ahead and loop over our sampledCombos now:

# loop over our hyperparameter combinations
for (i, p) in enumerate(sampledCombos):
	# log experiment number
	print("[INFO] starting trial {}/{}...".format(i + 1, len(sampledCombos)))

	# grab the default options for dlib's shape predictor and then
	# set the values based on our current hyperparameter values
	options = dlib.shape_predictor_training_options()
	options.tree_depth = p["tree_depth"]
	options.nu = p["nu"]
	options.cascade_depth = p["cascade_depth"]
	options.feature_pool_size = p["feature_pool_size"]
	options.num_test_splits = p["num_test_splits"]
	options.oversampling_amount = p["oversampling_amount"]
	otj = p["oversampling_translation_jitter"]
	options.oversampling_translation_jitter = otj

	# tell dlib to be verbose when training and utilize our supplied
	# number of threads when training
	options.be_verbose = True
	options.num_threads = procs

Line 99 grabs the default options for dlib's shape predictor. We need the default option attributes loaded in memory prior to us being able to change them individually. Lines 100-107 set each of the dlib shape predictor hyperparameter options according to this particular set of hyperparameters. Lines 111 and 112 tell dlib to be verbose when training and use the configured number of threads (refer to Lines 67 and 68 regarding the number of threads/processes). From here we will train and evaluate our shape predictor with dlib:

	# train the model using the current set of hyperparameters
	start = time.time()
	dlib.train_shape_predictor(config.TRAIN_PATH, config.TEMP_MODEL_PATH, options)
	trainingTime = time.time() - start

	# evaluate the model on both the training and testing split
	trainingError = evaluate_model_acc(config.TRAIN_PATH, config.TEMP_MODEL_PATH)
	testingError = evaluate_model_acc(config.TEST_PATH, config.TEMP_MODEL_PATH)

	# compute an approximate inference speed using the trained shape
	# predictor
	predictor = dlib.shape_predictor(config.TEMP_MODEL_PATH)
	inferenceSpeed = evaluate_model_speed(predictor, config.IMAGE_PATH)

	# determine the model size
	modelSize = os.path.getsize(config.TEMP_MODEL_PATH)

Lines 115-118 train our custom dlib shape predictor, including calculating the elapsed training time. We then use the newly trained shape predictor to compute the error on our training and testing splits, respectively (Lines 121-124). To estimate the inferenceSpeed, we determine how long it takes for the shape predictor to perform inference (i.e., given a detected face example image, how long does it take the model to localize the eyes?) via Lines 128-130. Line 133 grabs the filesize of the model.
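If you want a more human-friendly readout while the trials run, these raw numbers convert easily: inferenceSpeed is the average number of seconds per prediction, so its reciprocal is predictions per second, and modelSize is reported in bytes. A small, optional helper along these lines (the summarize_trial name is hypothetical and not part of the original script):

def summarize_trial(inferenceSpeed, modelSize):
	# seconds per prediction -> predictions per second
	fps = (1.0 / inferenceSpeed) if inferenceSpeed > 0 else float("inf")

	# bytes -> megabytes
	sizeMB = modelSize / (1024.0 * 1024.0)

	print("[INFO] ~{:.0f} predictions/sec, model size: {:.2f}MB".format(
		fps, sizeMB))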
Next, we'll output the hyperparameter options and evaluation metrics to the CSV file:

	# build the row of data that will be written to our CSV file
	row = [
		p["tree_depth"],
		p["nu"],
		p["cascade_depth"],
		p["feature_pool_size"],
		p["num_test_splits"],
		p["oversampling_amount"],
		p["oversampling_translation_jitter"],
		inferenceSpeed,
		trainingTime,
		trainingError,
		testingError,
		modelSize,
	]
	row = [str(x) for x in row]

	# write the output row to our CSV file
	csv.write("{}\n".format(",".join(row)))
	csv.flush()

	# delete the temporary shape predictor model
	if os.path.exists(config.TEMP_MODEL_PATH):
		os.remove(config.TEMP_MODEL_PATH)

# close the output CSV file
print("[INFO] cleaning up...")
csv.close()

Lines 136-150 generate a string-based list of the training hyperparameters and evaluation results. We then write the row to disk, delete the temporary model file, and clean up (Lines 153-162). Again, this loop will run for a maximum of 100 iterations to build our CSV rows of hyperparameter and evaluation data. Had we evaluated all 6,075 combinations, our computer would be churning data for weeks.
Exploring the shape predictor hyperparameter space
Now that we've implemented our Python script to explore dlib's shape predictor hyperparameter space, let's put it to work. Make sure you have:
Used the "Downloads" section of this tutorial to download the source code.
Downloaded the iBUG-300W dataset using the "Downloading the iBUG-300W dataset" section above.
Executed the parse_xml.py script for both the training and testing XML files in the "Preparing the iBUG-300W dataset for training" section.
Provided you have accomplished each of these steps, you can now execute the tune_predictor_hyperparams.py script: $ python tune_predictor_hyperparams.py [INFO] sampling 100 of 6075 possible combinations [INFO] starting trial 1/100... ... [INFO] starting trial 100/100... Training with cascade depth: 12 Training with tree depth: 4 Training with 500 trees per cascade level. Training with nu: 0.25 Training with random seed: Training with oversampling amount: 20 Training with oversampling translation jitter: 0.1 Training with landmark_relative_padding_mode: 1 Training with feature pool size: 1000 Training with feature pool region padding: 0 Training with 20 threads. Training with lambda_param: 0.1 Training with 100 split tests. Fitting trees... Training complete Training complete, saved predictor to file temp.dat [INFO] cleaning up... real 3052m50.195s user 30926m32.819s sys 338m44.848s On my iMac Pro with a 3GHz Intel Xeon W processor, the entire training time took ~3,052 minutes which equates to ~2.11 days. Be sure to run the script overnight and plan to check the status in 2-5 days depending on your computational horsepower. After the script completes, you should now have a file named trials.csv in your working directory: $ ls *.csv trials.csv Our trials.csv file contains the results of our experiments. In the next section, we’ll examine this file and use it to select optimal shape predictor options that balance speed, accuracy, and model size. Determining the optimal shape predictor parameters to balance speed, accuracy, and model size At this point, we have our output trials.csv file which contains the combination of (1) input shape predictor options/hyperparameter values and (2) the corresponding output accuracies, inference times, model sizes, etc. Our goal here is to analyze this CSV file and determine the most appropriate values for our particular task. To get started, open up this CSV file in your favorite spreadsheet application (ex.,
Microsoft Excel, macOS Numbers, Google Sheets, etc.):
Figure 4: Hyperparameter tuning a dlib shape predictor produced the following data to analyze in a spreadsheet. We will analyze hyperparameters and evaluation criteria to balance speed, accuracy, and shape predictor model size.
Let's now suppose that my goal is to train and deploy a shape predictor to an embedded device. For embedded devices, our model should:
Be as small as possible. A small model will also be fast when making predictions, a requirement when working with resource-constrained devices.
Have reasonable accuracy, understanding that we need to sacrifice accuracy a bit to have a small, fast model.
To identify the optimal hyperparameters for dlib's shape predictor, I would first sort my spreadsheet by model size:
Figure 5: Sort your dlib shape predictors by model size when you are analyzing the results of tuning your model to balance speed, accuracy, and model size.
I would then examine the inference_speed, training_error, and testing_error columns, looking for a model that is fast but also has reasonable accuracy. Doing so, I find the following model, bolded and selected in the spreadsheet:
Figure 6: After sorting your dlib shape predictor tuning results by `model_size`, examine the `inference_speed`, `training_error`, and `testing_error` columns, looking for a model that is fast but also has reasonable accuracy.
This model is:
Only 3.85MB in size
In the top-25 in terms of testing error
Extremely fast, capable of performing 1,875 predictions in a single second
Below I've included the shape predictor hyperparameters for this model:
tree_depth: 2
nu: 0.25
cascade_depth: 12
feature_pool_size: 500
num_test_splits: 100
oversampling_amount: 20
oversampling_translation_jitter: 0
Updating our shape predictor training script
We're almost done! The last update we need to make is to our train_shape_predictor.py file.
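As an aside, you can reproduce the same sort-and-filter triage programmatically instead of in a spreadsheet. Below is a minimal pandas sketch (not part of the original project; the column names match the cols list defined earlier, and the 5MB size cutoff is just an example threshold):

import pandas as pd

# load the tuning results produced by tune_predictor_hyperparams.py
df = pd.read_csv("trials.csv")

# keep only reasonably small models (example threshold: 5MB)
small = df[df["model_size"] < 5 * 1024 * 1024]

# rank the remaining candidates by testing error, breaking ties by
# inference speed, then inspect the top few by hand
candidates = small.sort_values(["testing_error", "inference_speed"])
print(candidates.head(10))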
Open up that file and insert the following code:

# import the necessary packages
import multiprocessing
import argparse
import dlib

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-t", "--training", required=True,
	help="path to input training XML file")
ap.add_argument("-m", "--model", required=True,
	help="path to serialized dlib shape predictor model")
args = vars(ap.parse_args())

# grab the default options for dlib's shape predictor
print("[INFO] setting shape predictor options...")
options = dlib.shape_predictor_training_options()

# update our hyperparameters
options.tree_depth = 2
options.nu = 0.25
options.cascade_depth = 12
options.feature_pool_size = 500
options.num_test_splits = 20
options.oversampling_amount = 20
options.oversampling_translation_jitter = 0

# tell the dlib shape predictor to be verbose and print out status
# messages as our model trains
options.be_verbose = True

# number of threads/CPU cores to be used when training -- we default
# this value to the number of available cores on the system, but you
# can supply an integer value here if you would like
options.num_threads = multiprocessing.cpu_count()

# log our training options to the terminal
print("[INFO] shape predictor options:")
print(options)

# train the shape predictor
print("[INFO] training shape predictor...")
dlib.train_shape_predictor(args["training"], args["model"], options)

Notice how on Lines 19-25 we have updated our shape predictor options using the optimal values we found in the previous section. The rest of our script takes care of training the shape predictor using these values. For a detailed review of the train_shape_predictor.py script, be sure to refer to last week's blog post.
Training the dlib shape predictor on our optimal option values
Now that we've identified our optimal shape predictor options, as well as updated our train_shape_predictor.py file with these values, we can proceed to train our model. Open up a terminal and execute the following command:

$ time python train_shape_predictor.py \
	--training ibug_300W_large_face_landmark_dataset/labels_ibug_300W_train_eyes.xml \
	--model optimal_eye_predictor.dat
[INFO] setting shape predictor options...
[INFO] shape predictor options:
shape_predictor_training_options(be_verbose=1, cascade_depth=12, tree_depth=2, num_trees_per_cascade_level=500, nu=0.25, oversampling_amount=20, oversampling_translation_jitter=0, feature_pool_size=500, lambda_param=0.1, num_test_splits=20, feature_pool_region_padding=0, random_seed=, num_threads=20, landmark_relative_padding_mode=1)
[INFO] training shape predictor...
Training with cascade depth: 12
Training with tree depth: 2
Training with 500 trees per cascade level.
Training with nu: 0.25
Training with random seed:
Training with oversampling amount: 20
Training with oversampling translation jitter: 0
Training with landmark_relative_padding_mode: 1
Training with feature pool size: 500
Training with feature pool region padding: 0
Training with 20 threads.
Training with lambda_param: 0.1
Training with 20 split tests.
Fitting trees...
Training complete Training complete, saved predictor to file optimal_eye_predictor.dat real 10m49.273s user 83m6.673s sys 0m47.224s Once trained, we can use the predict_eyes.py file (reviewed in last week’s blog post) to visually validate that our model is working properly:   As you can see, we have trained a dlib shape predictor that: Accurately localizes eyes Is fast in terms of inference/prediction speed Is small in terms of model size You can perform the same analysis when training your own custom dlib shape predictors as well. How can we speed up our shape predictor tuning script?
Figure 7: Tuning dlib shape predictor hyperparameters allows us to balance speed, accuracy, and model size.
The obvious bottleneck here is the tune_predictor_hyperparams.py script — exploring only 1.65% of the possible options took over 2 days to complete. Exploring all of the possible hyperparameters would therefore take months! And keep in mind that we're training an eyes-only landmark predictor. Had we been training models for all 68 typical landmarks, training would take even longer. In most cases we simply won't have that much time (or patience). So, what can we do about it? To start, I would suggest reducing your hyperparameter space. For example, let's assume you are training a dlib shape predictor model to be deployed to an embedded device such as the Raspberry Pi, Google Coral, or NVIDIA Jetson Nano. In those cases you'll want a model that is fast and small — you therefore know you'll need to compromise a bit on accuracy to obtain a fast and small model.
In that situation, you'll want to avoid exploring areas of the hyperparameter space that will result in models that are larger and slower to make predictions. Consider limiting your tree_depth, cascade_depth, and feature_pool_size explorations and focus on values that will result in a smaller, faster model. Do not confuse deployment with training. You should tune/train your shape predictor on a capable, full-size machine (i.e., not an embedded device). From there, assuming your model is reasonably small for an embedded device, you should then deploy the model to the target device. Secondly, I would suggest leveraging distributed computing. Tuning a model's hyperparameters is a great example of a problem that scales linearly and can be solved by throwing more hardware at it. For example, you could use Amazon's, Microsoft's, or Google's cloud to spin up multiple machines. Each machine can then be responsible for exploring non-overlapping subsets of the hyperparameters.
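One simple way to carve out those non-overlapping subsets is to have every machine build and shuffle the full list of combinations with the same random seed, then take every N-th entry. A minimal sketch (machineID and numMachines are hypothetical values you would set per machine; hyperparams is the same search-space dictionary used in tune_predictor_hyperparams.py):

import random
from sklearn.model_selection import ParameterGrid

# hypothetical per-machine settings
machineID = 0      # 0, 1, 2, ... unique on each machine
numMachines = 4    # total number of machines participating

# hyperparams: the same search-space dictionary defined in
# tune_predictor_hyperparams.py

# build and shuffle the combinations identically on every machine
combos = list(ParameterGrid(hyperparams))
random.seed(42)
random.shuffle(combos)

# each machine evaluates a disjoint slice of the shuffled list
myCombos = combos[machineID::numMachines]
print("[INFO] this machine will evaluate {} combinations".format(
	len(myCombos)))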
Given N total machines, you can reduce the amount of time it takes to tune your shape predictor options by a factor of N. Of course, we might not have the budget to leverage the cloud, in which case, you should see my first suggestion above.
Summary
In this tutorial you learned how to automatically tune the options and hyperparameters to dlib's shape predictor, allowing you to properly balance:
Model inference/prediction speed
Model accuracy
Model size
Tuning hyperparameters is very computationally expensive, so it's recommended that you either:
Budget enough time (2-4 days) on your personal laptop or desktop to run the hyperparameter tuning script.
Utilize distributed systems and potentially the cloud to spin up multiple systems, each of which crunches on non-overlapping subsets of the hyperparameters.
After the tuning script runs you can open up the resulting CSV/Excel file, sort it by which columns you are most interested in (i.e., speed, accuracy, size), and determine your optimal hyperparameters.
Given the parameters you found from your sorting, you can then update the shape predictor training script and then train your model. I hope you enjoyed today's tutorial!
https://pyimagesearch.com/2019/04/15/live-video-streaming-over-network-with-opencv-and-imagezmq/
In today's tutorial, you'll learn how to stream live video over a network with OpenCV. Specifically, you'll learn how to implement Python + OpenCV scripts to capture and stream video frames from a camera to a server. Every week or so I receive a comment on a blog post or a question over email that goes something like this: Hi Adrian, I'm working on a project where I need to stream frames from a client camera to a server for processing using OpenCV. Should I use an IP camera? Would a Raspberry Pi work? What about RTSP streaming? Have you tried using FFMPEG or GStreamer? How do you suggest I approach the problem? It's a great question — and if you've ever attempted live video streaming with OpenCV then you know there are a ton of different options. You could go with the IP camera route.
But IP cameras can be a pain to work with. Some IP cameras don't even allow you to access the RTSP (Real-time Streaming Protocol) stream. Other IP cameras simply don't work with OpenCV's cv2.VideoCapture function. An IP camera may be too expensive for your budget as well. In those cases, you are left with using a standard webcam — the question then becomes, how do you stream the frames from that webcam using OpenCV? Using FFMPEG or GStreamer is definitely an option. But both of those can be a royal pain to work with. Today I am going to show you my preferred solution using message passing libraries, specifically ZMQ and ImageZMQ, the latter of which was developed by PyImageConf 2018 speaker, Jeff Bass. Jeff has put a ton of work into ImageZMQ and his efforts really show. As you'll see, this method of OpenCV video streaming is not only reliable but incredibly easy to use, requiring only a few lines of code.
To learn how to perform live network video streaming with OpenCV, just keep reading!
Live video streaming over network with OpenCV and ImageZMQ
In the first part of this tutorial, we'll discuss why, and under which situations, we may choose to stream video with OpenCV over a network. From there we'll briefly discuss message passing along with ZMQ, a library for high performance asynchronous messaging for distributed systems. We'll then implement two Python scripts:
A client that will capture frames from a simple webcam
And a server that will take the input frames and run object detection on them
We'll be using Raspberry Pis as our clients to demonstrate how cheaper hardware can be used to build a distributed network of cameras capable of piping frames to a more powerful machine for additional processing. By the end of this tutorial, you'll be able to apply live video streaming with OpenCV to your own applications!
Why stream videos/frames over a network?
Figure 1: A great application of video streaming with OpenCV is a security camera system. You could use Raspberry Pis and a library called ImageZMQ to stream from the Pi (client) to the server.
There are a number of reasons why you may want to stream frames from a video stream over a network with OpenCV.
To start, you could be building a security application that requires all frames to be sent to a central hub for additional processing and logging. Or, your client machine may be highly resource constrained (such as a Raspberry Pi) and lack the necessary computational horsepower required to run computationally expensive algorithms (such as deep neural networks, for example). In these cases, you need a method to take input frames captured from a webcam with OpenCV and then pipe them over the network to another system. There are a variety of methods to accomplish this task (discussed in the introduction of the post), but today we are going to specifically focus on message passing. What is message passing? Figure 2: The concept of sending a message from a process, through a message broker, to other processes. With this method/concept, we can stream video over a network using OpenCV and ZMQ with a library called ImageZMQ. Message passing is a programming paradigm/concept typically used in multiprocessing, distributed, and/or concurrent applications. Using message passing, one process can communicate with one or more other processes, typically using a message broker. Whenever a process wants to communicate with another process, including all other processes, it must first send its request to the message broker.
The message broker receives the request and then handles sending the message to the other process(es). If necessary, the message broker also sends a response to the originating process. As an example of message passing let’s consider a tremendous life event, such as a mother giving birth to a newborn child (process communication depicted in Figure 2 above). Process A, the mother, wants to announce to all other processes (i.e., the family), that she had a baby. To do so, Process A constructs the message and sends it to the message broker. The message broker then takes that message and broadcasts it to all processes. All other processes then receive the message from the message broker. These processes want to show their support and happiness to Process A, so they construct a message saying their congratulations: Figure 3: Each process sends an acknowledgment (ACK) message back through the message broker to notify Process A that the message is received. The ImageZMQ video streaming project by Jeff Bass uses this approach. These responses are sent to the message broker which in turn sends them back to Process A (Figure 3).
This example is a dramatic simplification of message passing and message broker systems but should help you understand the general algorithm and the type of communication the processes are performing. You can very easily get into the weeds studying these topics, including various distributed programming paradigms and types of messages/communication (1:1 communication, 1:many, broadcasts, centralized, distributed, broker-less etc.). As long as you understand the basic concept that message passing allows processes to communicate (including processes on different machines) then you will be able to follow along with the rest of this post. What is ZMQ? Figure 4: The ZMQ library serves as the backbone for message passing in the ImageZMQ library. ImageZMQ is used for video streaming with OpenCV. Jeff Bass designed it for his Raspberry Pi network at his farm. ZeroMQ, or simply ZMQ for short, is a high-performance asynchronous message passing library used in distributed systems. Both RabbitMQ and ZeroMQ are some of the most highly used message passing systems. However, ZeroMQ specifically focuses on high throughput and low latency applications — which is exactly how you can frame live video streaming.
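To make the request/reply pattern underlying this approach concrete, here is a bare-bones pyzmq sketch that has nothing to do with video yet (the port number is an arbitrary example; this is plain ZMQ, not the ImageZMQ code we'll write later):

import multiprocessing
import zmq

def server():
	# REP socket: wait for a request, then send a reply
	ctx = zmq.Context()
	sock = ctx.socket(zmq.REP)
	sock.bind("tcp://*:5566")  # arbitrary example port
	msg = sock.recv()
	sock.send(b"ACK: " + msg)

def client():
	# REQ socket: send a request, then block until the reply arrives
	ctx = zmq.Context()
	sock = ctx.socket(zmq.REQ)
	sock.connect("tcp://localhost:5566")
	sock.send(b"hello from the client")
	print(sock.recv())

if __name__ == "__main__":
	# run the server in a second process so the client can talk to it
	p = multiprocessing.Process(target=server)
	p.start()
	client()
	p.join()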
When building a system to stream live videos over a network using OpenCV, you would want a system that focuses on: High throughput: There will be new frames from the video stream coming in quickly. Low latency: As we’ll want the frames distributed to all nodes on the system as soon as they are captured from the camera. ZeroMQ also has the benefit of being extremely easy to both install and use. Jeff Bass, the creator of ImageZMQ (which builds on ZMQ), chose to use ZMQ as the message passing library for these reasons — and I couldn’t agree with him more. The ImageZMQ library Figure 5: The ImageZMQ library is designed for streaming video efficiently over a network. It is a Python package and integrates with OpenCV. Jeff Bass is the owner of Yin Yang Ranch, a permaculture farm in Southern California. He was one of the first people to join PyImageSearch Gurus, my flagship computer vision course. In the course and community he has been an active participant in many discussions around the Raspberry Pi. Jeff has found that Raspberry Pis are perfect for computer vision and other tasks on his farm.
They are inexpensive, readily available, and astoundingly resilient/reliable. At PyImageConf 2018 Jeff spoke about his farm and more specifically about how he used Raspberry Pis and a central computer to manage data collection and analysis. The heart of his project is a library that he put together called ImageZMQ. ImageZMQ solves the problem of real-time streaming from the Raspberry Pis on his farm. It is based on ZMQ and works really well with OpenCV. Plain and simple, it just works. And it works really reliably. I’ve found it to be more reliable than alternatives such as GStreamer or FFMPEG streams. I’ve also had better luck with it than using RTSP streams. You can learn the details of ImageZMQ by studying Jeff’s code on GitHub.
Jeff's slides from PyImageConf 2018 are also available here. In a few days, I'll be posting my interview with Jeff Bass on the blog as well. Let's configure our clients and server with ImageZMQ and put them to work!
Configuring your system and installing required packages
Figure 6: To install ImageZMQ for video streaming, you'll need Python, ZMQ, and OpenCV.
Installing ImageZMQ is quite easy. First, let's pip install a few packages into your Python virtual environment (assuming you're using one). If you need to set up pip and virtual environments, please refer to my pip install opencv tutorial first. Then use the following commands:

$ workon <env_name> # my environment is named py3cv4
$ pip install opencv-contrib-python
$ pip install imagezmq
$ pip install imutils

You must install these packages on both the clients and server. Provided you didn't encounter any issues, you are now ready to move on. Note: On your Raspberry Pi, we recommend installing this version of OpenCV: pip install opencv-contrib-python==4.1.0.25.
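As a quick sanity check that the install worked, you can run a tiny sender/receiver pair on a single machine before wiring up any Raspberry Pis. This is only a sketch using the same ImageZMQ calls we'll rely on later (the dummy frame and localhost address are placeholders):

# receiver.py -- run this first, in one terminal
import imagezmq

imageHub = imagezmq.ImageHub()
(name, frame) = imageHub.recv_image()
imageHub.send_reply(b"OK")
print("[INFO] received a {} frame from {}".format(frame.shape, name))

# sender.py -- run this in a second terminal on the same machine
import numpy as np
import imagezmq

sender = imagezmq.ImageSender(connect_to="tcp://127.0.0.1:5555")
frame = np.zeros((240, 320, 3), dtype="uint8")  # dummy black frame
sender.send_image("test-client", frame)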
Preparing clients for ImageZMQ ImageZMQ must be installed on each client and the central server. In this section, we’ll cover one important difference for clients. Our code is going to use the hostname of the client to identify it. You could use the IP address in a string for identification, but setting a client’s hostname allows you to more easily identify the purpose of the client. In this example, we’ll assume you are using a Raspberry Pi running Raspbian. Of course, your client could run Windows Embedded, Ubuntu, macOS, etc., but since our demo uses Raspberry Pis, let’s learn how to change the hostname on the RPi. To change the hostname on your Raspberry Pi, fire up a terminal (this could be over an SSH connection if you’d like). Then run the raspi-config command: $ sudo raspi-config You’ll be presented with this terminal screen: Figure 7: Configuring a Raspberry Pi hostname with raspi-config. Shown is the raspi-config home screen.
Navigate to “2 Network Options” and press enter. Figure 8: Raspberry Pi raspi-config network settings page. Then choose the option “N1 Hostname”. Figure 9: Setting the Raspberry Pi hostname to something easily identifiable/memorable. Our video streaming with OpenCV and ImageZMQ script will use the hostname to identify Raspberry Pi clients. You can now change your hostname and select “<Ok>”. You will be prompted to reboot — a reboot is required. I recommend naming your Raspberry Pis like this: pi-location . Here are a few examples: pi-garage pi-frontporch pi-livingroom pi-driveway …you get the idea. This way when you pull up your router page on your network, you’ll know what the Pi is for and its corresponding IP address.
On some networks, you could even connect via SSH without providing the IP address like this: $ ssh pi@pi-frontporch As you can see, it will likely save some time later. Defining the client and server relationship Figure 10: The client/server relationship for ImageZMQ video streaming with OpenCV. Before we actually implement network video streaming with OpenCV, let’s first define the client/server relationship to ensure we’re on the same page and using the same terms: Client: Responsible for capturing frames from a webcam using OpenCV and then sending the frames to the server. Server: Accepts frames from all input clients. You could argue back and forth as to which system is the client and which is the server. For example, a system that is capturing frames via a webcam and then sending them elsewhere could be considered a server — the system is undoubtedly serving up frames. Similarly, a system that accepts incoming data could very well be the client. However, we are assuming: There is at least one (and likely many more) system responsible for capturing frames. There is only a single system used for actually receiving and processing those frames. For these reasons, I prefer to think of the system sending the frames as the client and the system receiving/processing the frames as the server.
You may disagree with me, but that is the client-server terminology we’ll be using throughout the remainder of this tutorial. Project structure Be sure to grab the “Downloads” for today’s project. From there, unzip the files and navigate into the project directory. You may use the tree command to inspect the structure of the project: $ tree . ├── MobileNetSSD_deploy.caffemodel ├── MobileNetSSD_deploy.prototxt ├── client.py └── server.py 0 directories, 4 files Note: If you’re going with the third alternative discussed above, then you would need to place the imagezmq source directory in the project as well. The first two files listed in the project are the pre-trained Caffe MobileNet SSD object detection files. The server (server.py ) will take advantage of these Caffe files using OpenCV’s DNN module to perform object detection. The client.py script will reside on each device which is sending a stream to the server. Later on, we’ll upload client.py onto each of the Pis (or another machine) on your network so they can send video frames to the central location. Implementing the client OpenCV video streamer (i.e., video sender) Let’s start by implementing the client which will be responsible for: Capturing frames from the camera (either USB or the RPi camera module) Sending the frames over the network via ImageZMQ Open up the client.py file and insert the following code: # import the necessary packages from imutils.video import VideoStream import imagezmq import argparse import socket import time # construct the argument parser and parse the arguments ap = argparse.
ArgumentParser() ap.add_argument("-s", "--server-ip", required=True, help="ip address of the server to which the client will connect") args = vars(ap.parse_args()) # initialize the ImageSender object with the socket address of the # server sender = imagezmq. ImageSender(connect_to="tcp://{}:5555".format( args["server_ip"])) We start off by importing packages and modules on Lines 2-6: Pay close attention here to see that we’re importing imagezmq in our client-side script. VideoStream will be used to grab frames from our camera. Our argparse import will be used to process a command line argument containing the server’s IP address (--server-ip is parsed on Lines 9-12). The socket module of Python is simply used to grab the hostname of the Raspberry Pi. Finally, time will be used to allow our camera to warm up prior to sending frames. Lines 16 and 17 simply create the imagezmq sender object and specify the IP address and port of the server. The IP address will come from the command line argument that we already established. I’ve found that port 5555 doesn’t usually have conflicts, so it is hardcoded. You could easily turn it into a command line argument if you need to as well.
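If you do want the port to be configurable, a couple of extra lines in client.py are all it takes. A sketch of the change (the --server-port switch is hypothetical and not part of the download):

# hypothetical addition to client.py's argument parser
ap.add_argument("-p", "--server-port", type=int, default=5555,
	help="port of the server to which the client will connect")
args = vars(ap.parse_args())

# use the port when building the ImageSender connection string
sender = imagezmq.ImageSender(connect_to="tcp://{}:{}".format(
	args["server_ip"], args["server_port"]))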
Let’s initialize our video stream and start sending frames to the server: # get the host name, initialize the video stream, and allow the # camera sensor to warmup rpiName = socket.gethostname() vs = VideoStream(usePiCamera=True).start() #vs = VideoStream(src=0).start() time.sleep(2.0) while True: # read the frame from the camera and send it to the server frame = vs.read() sender.send_image(rpiName, frame) Now, we’ll grab the hostname, storing the value as rpiName (Line 21). Refer to “Preparing clients for ImageZMQ” above to set your hostname on a Raspberry Pi. From there, our VideoStream object is created to connect grab frames from our PiCamera. Alternatively, you can use any USB camera connected to the Pi by commenting Line 22 and uncommenting Line 23. This is the point where you should also set your camera resolution. We are just going to use the maximum resolution so the argument is not provided. But if you find that there is a lag, you are likely sending too many pixels. If that is the case, you may reduce your resolution quite easily. Just pick from one of the resolutions available for the PiCamera V2 here: PiCamera ReadTheDocs. The second table is for V2.
Once you’ve chosen the resolution, edit Line 22 like this: vs = VideoStream(usePiCamera=True, resolution=(320, 240)).start() Note: The resolution argument won’t make a difference for USB cameras since they are all implemented differently. As an alternative, you can insert a frame = imutils.resize(frame, width=320) between Lines 28 and 29 to resize the frame manually. From there, a warmup sleep time of 2.0 seconds is set (Line 24). Finally, our while loop on Lines 26-29 grabs and sends the frames. As you can see, the client is quite simple and straightforward! Let’s move on to the actual server. Implementing the OpenCV video server (i.e., video receiver) The live video server will be responsible for: Accepting incoming frames from multiple clients. Applying object detection to each of the incoming frames. Maintaining an “object count” for each of the frames (i.e., count the number of objects). Let’s go ahead and implement the server — open up the server.py file and insert the following code: # import the necessary packages from imutils import build_montages from datetime import datetime import numpy as np import imagezmq import argparse import imutils import cv2 # construct the argument parser and parse the arguments ap = argparse.
ArgumentParser() ap.add_argument("-p", "--prototxt", required=True, help="path to Caffe 'deploy' prototxt file") ap.add_argument("-m", "--model", required=True, help="path to Caffe pre-trained model") ap.add_argument("-c", "--confidence", type=float, default=0.2, help="minimum probability to filter weak detections") ap.add_argument("-mW", "--montageW", required=True, type=int, help="montage frame width") ap.add_argument("-mH", "--montageH", required=True, type=int, help="montage frame height") args = vars(ap.parse_args()) On Lines 2-8 we import packages and libraries. In this script, most notably we’ll be using: build_montages : To build a montage of all incoming frames. imagezmq : For streaming video from clients. In our case, each client is a Raspberry Pi. imutils : My package of OpenCV and other image processing convenience functions available on GitHub and PyPi. cv2 : OpenCV’s DNN module will be used for deep learning object detection inference. Are you wondering where imutils.video. VideoStream is? We usually use my VideoStream class to read frames from a webcam. However, don’t forget that we’re using imagezmq for streaming frames from clients.
The server doesn’t have a camera directly wired to it. Let’s process five command line arguments with argparse: --prototxt : The path to our Caffe deep learning prototxt file. --model : The path to our pre-trained Caffe deep learning model. I’ve provided MobileNet SSD in the “Downloads” but with some minor changes, you could elect to use an alternative model. --confidence : Our confidence threshold to filter weak detections. --montageW : This is not width in pixels. Rather this is the number of columns for our montage. We’re going to stream from four raspberry Pis today, so you could do 2×2, 4×1, or 1×4. You could also do, for example, 3×3 for nine clients, but 5 of the boxes would be empty. --montageH : The number of rows for your montage.
See the --montageW explanation. Let’s initialize our ImageHub object along with our deep learning object detector: # initialize the ImageHub object imageHub = imagezmq. ImageHub() # initialize the list of class labels MobileNet SSD was trained to # detect, then generate a set of bounding box colors for each class CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"] # load our serialized model from disk print("[INFO] loading model...") net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"]) Our server needs an ImageHub to accept connections from each of the Raspberry Pis. It essentially uses sockets and ZMQ for receiving frames across the network (and sending back acknowledgments). Our MobileNet SSD object CLASSES are specified on Lines 29-32. If you aren’t familiar with the MobileNet Single Shot Detector, please refer to this blog post or Deep Learning for Computer Vision with Python. From there we’ll instantiate our Caffe object detector on Line 36. Initializations come next: # initialize the consider set (class labels we care about and want # to count), the object count dictionary, and the frame dictionary CONSIDER = set(["dog", "person", "car"]) objCount = {obj: 0 for obj in CONSIDER} frameDict = {} # initialize the dictionary which will contain information regarding # when a device was last active, then store the last time the check # was made was now lastActive = {} lastActiveCheck = datetime.now() # stores the estimated number of Pis, active checking period, and # calculates the duration seconds to wait before making a check to # see if a device was active ESTIMATED_NUM_PIS = 4 ACTIVE_CHECK_PERIOD = 10 ACTIVE_CHECK_SECONDS = ESTIMATED_NUM_PIS * ACTIVE_CHECK_PERIOD # assign montage width and height so we can view all incoming frames # in a single "dashboard" mW = args["montageW"] mH = args["montageH"] print("[INFO] detecting: {}...".format(", ".join(obj for obj in CONSIDER))) In today’s example, I’m only going to CONSIDER three types of objects from the MobileNet SSD list of CLASSES . We’re considering (1) dogs, (2) persons, and (3) cars on Line 40. We’ll soon use this CONSIDER set to filter out other classes that we don’t care about such as chairs, plants, monitors, or sofas which don’t typically move and aren’t interesting for this security type project.
Line 41 initializes a dictionary for our object counts to be tracked in each video feed. Each count is initialized to zero. A separate dictionary, frameDict is initialized on Line 42. The frameDict dictionary will contain the hostname key and the associated latest frame value. Lines 47 and 48 are variables which help us determine when a Pi last sent a frame to the server. If it has been a while (i.e. there is a problem), we can get rid of the static, out of date image in our montage. The lastActive dictionary will have hostname keys and timestamps for values. Lines 53-55 are constants which help us to calculate whether a Pi is active. Line 55 itself calculates that our check for activity will be 40 seconds. You can reduce this period of time by adjusting ESTIMATED_NUM_PIS and ACTIVE_CHECK_PERIOD on Lines 53 and 54.
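To sanity check that arithmetic, here is a tiny illustration (with made-up timestamps, not part of server.py) of the staleness test the main loop will perform later:

from datetime import datetime, timedelta

ESTIMATED_NUM_PIS = 4
ACTIVE_CHECK_PERIOD = 10
ACTIVE_CHECK_SECONDS = ESTIMATED_NUM_PIS * ACTIVE_CHECK_PERIOD  # 40 seconds

# pretend a Pi last sent us a frame 55 seconds ago
lastSeen = datetime.now() - timedelta(seconds=55)
stale = (datetime.now() - lastSeen).seconds > ACTIVE_CHECK_SECONDS
print(stale)  # True -- this feed would be considered inactive and dropped from the montage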
Our mW and mH variables on Lines 59 and 60 represent the width and height (columns and rows) for our montage. These values are pulled directly from the command line args dictionary. Let’s loop over incoming streams from our clients and process the data! # start looping over all the frames while True: # receive RPi name and frame from the RPi and acknowledge # the receipt (rpiName, frame) = imageHub.recv_image() imageHub.send_reply(b'OK') # if a device is not in the last active dictionary then it means # that it's a newly connected device if rpiName not in lastActive.keys(): print("[INFO] receiving data from {}...".format(rpiName)) # record the last active time for the device from which we just # received a frame lastActive[rpiName] = datetime.now() We begin looping on Line 65. Lines 68 and 69 grab an image from the imageHub and send an ACK message. The result of imageHub.recv_image is rpiName , in our case the hostname, and the video frame itself. It is really as simple as that to receive frames from an ImageZMQ video stream! Lines 73-78 perform housekeeping duties to determine when a Raspberry Pi was lastActive . Let’s perform inference on a given incoming frame : # resize the frame to have a maximum width of 400 pixels, then # grab the frame dimensions and construct a blob frame = imutils.resize(frame, width=400) (h, w) = frame.shape[:2] blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843, (300, 300), 127.5) # pass the blob through the network and obtain the detections and # predictions net.setInput(blob) detections = net.forward() # reset the object count for each object in the CONSIDER set objCount = {obj: 0 for obj in CONSIDER} Lines 82-90 perform object detection on the frame : The frame dimensions are computed. A blob is created from the image (see this post for more details about how OpenCV’s blobFromImage function works).
The blob is passed through the neural net. From there, on Line 93 we reset the object counts to zero (we will be populating the dictionary with fresh count values shortly). Let’s loop over the detections with the goal of (1) counting, and (2) drawing boxes around objects that we are considering: # loop over the detections for i in np.arange(0, detections.shape[2]): # extract the confidence (i.e., probability) associated with # the prediction confidence = detections[0, 0, i, 2] # filter out weak detections by ensuring the confidence is # greater than the minimum confidence if confidence > args["confidence"]: # extract the index of the class label from the # detections idx = int(detections[0, 0, i, 1]) # check to see if the predicted class is in the set of # classes that need to be considered if CLASSES[idx] in CONSIDER: # increment the count of the particular object # detected in the frame objCount[CLASSES[idx]] += 1 # compute the (x, y)-coordinates of the bounding box # for the object box = detections[0, 0, i, 3:7] * np.array([w, h, w, h]) (startX, startY, endX, endY) = box.astype("int") # draw the bounding box around the detected object on # the frame cv2.rectangle(frame, (startX, startY), (endX, endY), (255, 0, 0), 2) On Line 96 we begin looping over each of the detections . Inside the loop, we proceed to: Extract the object confidence and filter out weak detections (Lines 99-103). Grab the label idx (Line 106) and ensure that the label is in the CONSIDER set (Line 110). For each detection that has passed the two checks (confidence threshold and in CONSIDER ), we will: Increment the objCount for the respective object (Line 113). Draw a rectangle around the object (Lines 117-123). Next, let’s annotate each frame with the hostname and object counts. We’ll also build a montage to display them in: # draw the sending device name on the frame cv2.putText(frame, rpiName, (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2) # draw the object count on the frame label = ", ".join("{}: {}".format(obj, count) for (obj, count) in objCount.items()) cv2.putText(frame, label, (10, h - 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2) # update the new frame in the frame dictionary frameDict[rpiName] = frame # build a montage using images in the frame dictionary montages = build_montages(frameDict.values(), (w, h), (mW, mH)) # display the montage(s) on the screen for (i, montage) in enumerate(montages): cv2.imshow("Home pet location monitor ({})".format(i), montage) # detect any keypresses key = cv2.waitKey(1) & 0xFF On Lines 126-133 we make two calls to cv2.putText to draw the Raspberry Pi hostname and object counts. From there we update our frameDict with the frame corresponding to the RPi hostname.
Lines 139-144 create and display a montage of our client frames. The montage will be mW frames wide and mH frames tall. Keypresses are captured via Line 147. The last block is responsible for checking our lastActive timestamps for each client feed and removing frames from the montage that have stalled. Let’s see how it works: # if current time *minus* last time when the active device check # was made is greater than the threshold set then do a check if (datetime.now() - lastActiveCheck).seconds > ACTIVE_CHECK_SECONDS: # loop over all previously active devices for (rpiName, ts) in list(lastActive.items()): # remove the RPi from the last active and frame # dictionaries if the device hasn't been active recently if (datetime.now() - ts).seconds > ACTIVE_CHECK_SECONDS: print("[INFO] lost connection to {}".format(rpiName)) lastActive.pop(rpiName) frameDict.pop(rpiName) # set the last active check time as current time lastActiveCheck = datetime.now() # if the `q` key was pressed, break from the loop if key == ord("q"): break # do a bit of cleanup cv2.destroyAllWindows() There’s a lot going on in Lines 151-162. Let’s break it down: We only perform a check if at least ACTIVE_CHECK_SECONDS have passed (Line 151). We loop over each key-value pair in lastActive (Line 153): If the device hasn’t been active recently (Line 156) we need to remove data (Lines 158 and 159). First we remove (pop ) the rpiName and timestamp from lastActive . Then the rpiName and frame are removed from the frameDict . The lastActiveCheck is updated to the current time on Line 162.
Effectively this will help us get rid of expired frames (i.e. frames that are no longer real-time). This is really important if you are using the ImageHub server for a security application. Perhaps you are saving key motion events like a Digital Video Recorder (DVR). The worst thing that could happen if you don’t get rid of expired frames is that an intruder kills power to a client and you don’t realize the frame isn’t updating. Think James Bond or Jason Bourne sort of spy techniques. Last in the loop is a check to see if the "q" key has been pressed — if so we break from the loop and destroy all active montage windows (Lines 165-169). Streaming video over network with OpenCV Now that we’ve implemented both the client and the server, let’s put them to the test. Make sure you use the “Downloads” section of this post to download the source code. From there, upload the client to each of your Pis using SCP: $ scp client.py pi@192.168.1.10:~ $ scp client.py pi@192.168.1.11:~ $ scp client.py pi@192.168.1.12:~ $ scp client.py pi@192.168.1.13:~ In this example, I’m using four Raspberry Pis, but four aren’t required — you can use more or less. Be sure to use applicable IP addresses for your network.
You also need to follow the installation instructions to install ImageZMQ on each Raspberry Pi. See the “Configuring your system and installing required packages” section in this blog post. Before we start the clients, we must start the server. Let’s fire it up with the following command: $ python server.py --prototxt MobileNetSSD_deploy.prototxt \ --model MobileNetSSD_deploy.caffemodel --montageW 2 --montageH 2 Once your server is running, go ahead and start each client pointing to the server. Here is what you need to do on each client, step-by-step: Open an SSH connection to the client: ssh pi@192.168.1.10 Start screen on the client: screen Source your profile: source ~/.profile Activate your environment: workon py3cv4 Install ImageZMQ using instructions in “Configuring your system and installing required packages”. Run the client: python client.py --server-ip 192.168.1.5 As an alternative to these steps, you may start the client script on reboot. Automagically, your server will start bringing in frames from each of your Pis. Each frame that comes in is passed through the MobileNet SSD. Here’s a quick demo of the result: A full video demo can be seen below:
Summary In this tutorial, you learned how to stream video over a network using OpenCV and the ImageZMQ library. Instead of relying on IP cameras or FFMPEG/GStreamer, we used a simple webcam and a Raspberry Pi to capture input frames and then stream them to a more powerful machine for additional processing using a distributed system concept called message passing. Thanks to Jeff Bass’ hard work (the creator of ImageZMQ) our implementation required only a few lines of code. If you are ever in a situation where you need to stream live video over a network, definitely give ImageZMQ a try — I think you’ll find it super intuitive and easy to use. I’ll be back in a few days with an interview with Jeff Bass as well! To download the source code to this post, and be notified when future tutorials are published here on PyImageSearch, just enter your email address in the form below!
https://pyimagesearch.com/2019/04/22/getting-started-with-google-corals-tpu-usb-accelerator/
Click here to download the source code to this post In this tutorial, you will learn how to configure your Google Coral TPU USB Accelerator on Raspberry Pi and Ubuntu. You’ll then learn how to perform classification and object detection using Google Coral’s USB Accelerator. A few weeks ago, Google released “Coral”, a super fast, “no internet required” development board and USB accelerator that enables deep learning practitioners to deploy their models “on the edge” and “closer to the data”. Using Coral, deep learning developers are no longer required to have an internet connection, meaning that the Coral TPU is fast enough to perform inference directly on the device rather than sending the image/frame to the cloud for inference and prediction. The Google Coral comes in two flavors: A single-board computer with an onboard Edge TPU. The dev board could be thought of as an “advanced Raspberry Pi for AI” or a competitor to NVIDIA’s Jetson Nano. A USB accelerator that plugs into a device (such as a Raspberry Pi). The USB stick includes an Edge TPU built into it. Think of Google’s Coral USB Accelerator as a competitor to Intel’s Movidius NCS. Today we’ll be focusing on the Coral USB Accelerator as it’s easier to get started with (and it fits nicely with our theme of Raspberry Pi-related posts the past few weeks).
To learn how to configure your Google Coral USB Accelerator (and perform classification + object detection), just keep reading! Getting started with Google Coral’s TPU USB Accelerator Figure 1: The Google Coral TPU Accelerator adds deep learning capability to resource-constrained devices like the Raspberry Pi (source). In this post I’ll be assuming that you have: Your Google Coral USB Accelerator stick A fresh install of a Debian-based Linux distribution (i.e., Raspbian, Ubuntu, etc.) A basic understanding of Linux commands and file paths If you don’t already own a Google Coral Accelerator, you can purchase one via Google’s official website. I’ll be configuring the Coral USB Accelerator on Raspbian, but again, provided that you have a Debian-based OS, these commands will still work. Let’s get started! Update 2019-12-30: Installation steps 1-6 have been completely refactored and updated to align with Google’s recommended instructions for installing Coral’s EdgeTPU runtime library. My main contribution is the addition of Python virtual environments. I’ve also updated the section on how to run the example scripts. Step #1: Installing the Coral EdgeTPU Runtime and Python API In this step, we will use the APT package manager to install Google Coral’s Debian/Raspbian-compatible package.
First, let’s add the package repository: $ echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list $ curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - $ sudo apt-get update Note: Be careful with the line-wrapping and ensure that you copy each full command + enter in your terminal as shown. Now we’re ready to install the EdgeTPU runtime library: $ sudo apt-get install libedgetpu1-std Followed by installing the EdgeTPU Python API: $ sudo apt-get install python3-edgetpu Step #2: Reboot your device Rebooting your Raspberry Pi or computer is critical for the installation to complete. You can use the following command: $ sudo reboot now Step #3: Setting up your Google Coral virtual environment We’ll be using Python virtual environments, a best practice when working with Python. A Python virtual environment is an isolated development/testing/production environment on your system — it is fully sequestered from other environments. Best of all, you can manage the Python packages inside your virtual environment with pip (Python’s package manager). Of course, there are alternatives for managing virtual environments and packages (namely Anaconda/conda and venv). I’ve used/tried them all, but have settled on pip, virtualenv, and virtualenvwrapper as the preferred tools that I install on all of my systems. If you use the same tools as me, you’ll receive the best support from me. You can install pip using the following commands: $ wget https://bootstrap.pypa.io/get-pip.py $ sudo python get-pip.py $ sudo python3 get-pip.py $ sudo rm -rf ~/.cache/pip Let’s install virtualenv and virtualenvwrapper now: $ sudo pip install virtualenv virtualenvwrapper Once both virtualenv and virtualenvwrapper have been installed, open up your ~/.bashrc file: $ nano ~/.bashrc …and append the following lines to the bottom of the file: # virtualenv and virtualenvwrapper export WORKON_HOME=$HOME/.virtualenvs export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3 source /usr/local/bin/virtualenvwrapper.sh Save and exit via ctrl + x , y , enter . From there, reload your ~/.bashrc file to apply the changes to your current bash session: $ source ~/.bashrc Next, create your Python 3 virtual environment: $ mkvirtualenv coral -p python3 Here we are creating a Python virtual environment named coral using Python 3.
Going forward, I recommend Python 3. Note: Python 2.7 will reach the end of its life on January 1st, 2020, so I do not recommend using Python 2.7. Step #4: Sym-link the EdgeTPU runtime into your coral virtual environment A symbolic link is a virtual link from one file/folder to another file/folder. You can learn more on Wikipedia’s article. We will create a symbolic link from the system packages folder containing the EdgeTPU runtime library to our virtual environment. First, let’s find the path where the Python EdgeTPU package is installed: $ dpkg -L python3-edgetpu /. /usr /usr/lib /usr/lib/python3 /usr/lib/python3/dist-packages /usr/lib/python3/dist-packages/edgetpu /usr/lib/python3/dist-packages/edgetpu/__init__.py /usr/lib/python3/dist-packages/edgetpu/basic /usr/lib/python3/dist-packages/edgetpu/basic/__init__.py /usr/lib/python3/dist-packages/edgetpu/basic/basic_engine.py /usr/lib/python3/dist-packages/edgetpu/basic/edgetpu_utils.py /usr/lib/python3/dist-packages/edgetpu/classification /usr/lib/python3/dist-packages/edgetpu/classification/__init__.py /usr/lib/python3/dist-packages/edgetpu/classification/engine.py /usr/lib/python3/dist-packages/edgetpu/detection /usr/lib/python3/dist-packages/edgetpu/detection/__init__.py /usr/lib/python3/dist-packages/edgetpu/detection/engine.py /usr/lib/python3/dist-packages/edgetpu/learn /usr/lib/python3/dist-packages/edgetpu/learn/__init__.py /usr/lib/python3/dist-packages/edgetpu/learn/backprop /usr/lib/python3/dist-packages/edgetpu/learn/backprop/__init__.py /usr/lib/python3/dist-packages/edgetpu/learn/backprop/ops.py /usr/lib/python3/dist-packages/edgetpu/learn/backprop/softmax_regression.py /usr/lib/python3/dist-packages/edgetpu/learn/imprinting /usr/lib/python3/dist-packages/edgetpu/learn/imprinting/__init__.py /usr/lib/python3/dist-packages/edgetpu/learn/imprinting/engine.py /usr/lib/python3/dist-packages/edgetpu/learn/utils.py /usr/lib/python3/dist-packages/edgetpu/swig /usr/lib/python3/dist-packages/edgetpu/swig/__init__.py /usr/lib/python3/dist-packages/edgetpu/swig/_edgetpu_cpp_wrapper.cpython-35m-arm-linux-gnueabihf.so /usr/lib/python3/dist-packages/edgetpu/swig/_edgetpu_cpp_wrapper.cpython-36m-arm-linux-gnueabihf.so /usr/lib/python3/dist-packages/edgetpu/swig/_edgetpu_cpp_wrapper.cpython-37m-arm-linux-gnueabihf.so /usr/lib/python3/dist-packages/edgetpu/swig/edgetpu_cpp_wrapper.py /usr/lib/python3/dist-packages/edgetpu/utils /usr/lib/python3/dist-packages/edgetpu/utils/__init__.py /usr/lib/python3/dist-packages/edgetpu/utils/dataset_utils.py /usr/lib/python3/dist-packages/edgetpu/utils/image_processing.py /usr/lib/python3/dist-packages/edgetpu/utils/warning.py /usr/lib/python3/dist-packages/edgetpu-2.12.2.egg-info /usr/lib/python3/dist-packages/edgetpu-2.12.2.egg-info/PKG-INFO /usr/lib/python3/dist-packages/edgetpu-2.12.2.egg-info/dependency_links.txt /usr/lib/python3/dist-packages/edgetpu-2.12.2.egg-info/requires.txt /usr/lib/python3/dist-packages/edgetpu-2.12.2.egg-info/top_level.txt /usr/share /usr/share/doc /usr/share/doc/python3-edgetpu /usr/share/doc/python3-edgetpu/changelog.Debian.gz /usr/share/doc/python3-edgetpu/copyright Notice in the command’s output on Line 7 that we have found the root directory of the edgetpu library to be: /usr/lib/python3/dist-packages/edgetpu. We will create a sym-link to that path from our virtual environment site-packages.
Let’s create our sym-link now: $ cd ~/.virtualenvs/coral/lib/python3.7/site-packages $ ln -s /usr/lib/python3/dist-packages/edgetpu/ edgetpu $ cd ~ Step #5: Test your Coral EdgeTPU installation Let’s fire up a Python shell to test our Google Coral installation: $ workon coral $ python >>> import edgetpu >>> edgetpu.__version__ '2.12.2' Step #5b: Optional Python packages you may wish to install for the Google Coral As you go down the path of working with your Google Coral, you’ll find that you need a handful of other packages installed in your virtual environment. Let’s install packages for working with the PiCamera (Raspberry Pi only) and image processing: $ workon coral $ pip install "picamera[array]" # Raspberry Pi only $ pip install numpy $ pip install opencv-contrib-python==4.1.0.25 $ pip install imutils $ pip install scikit-image $ pip install pillow Step #6: Install EdgeTPU examples Now that we’ve installed the TPU runtime library, let’s put the Coral USB Accelerator to the test!
First let’s install the EdgeTPU Examples package: $ sudo apt-get install edgetpu-examples From there, we’ll need to add write permissions to the examples directory: $ sudo chmod a+w /usr/share/edgetpu/examples Project Structure The examples for today’s tutorial are self-contained and do not require an additional download. Go ahead and activate your environment and change into the examples directory: $ workon coral $ cd /usr/share/edgetpu/examples The examples directory contains directories for images and models along with a selection of Python scripts. Let’s inspect our project structure with the tree command: $ tree --dirsfirst . ├── images │   ├── bird.bmp │   ├── cat.bmp │   ├── COPYRIGHT │   ├── grace_hopper.bmp │   ├── parrot.jpg │   └── sunflower.bmp ├── models │   ├── coco_labels.txt │   ├── deeplabv3_mnv2_pascal_quant_edgetpu.tflite │   ├── inat_bird_labels.txt │   ├── mobilenet_ssd_v1_coco_quant_postprocess_edgetpu.tflite │   ├── mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite │   ├── mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite │   └── mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite ├── backprop_last_layer.py ├── classify_capture.py ├── classify_image.py ├── imprinting_learning.py ├── object_detection.py ├── semantic_segmetation.py └── two_models_inference.py 2 directories, 20 files We will be using the following MobileNet-based TensorFlow Lite models in the next section: mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite : Classification model trained on the iNaturalist (iNat) Birds dataset. mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite : Face detection model. mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite : Object detection model trained on the COCO dataset. The first model will be used with the classify_image.py classification Python script. Both models 2 and 3 will be used with the object_detection.py Python script for object detection. Keep in mind that face detection is a form of object detection. Classification, object detection, and face detection using the Google Coral USB Accelerator At this point we are ready to put our Google Coral coprocessor to the test!
Let’s start by performing a simple image classification example: $ python classify_image.py \ --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \ --label models/inat_bird_labels.txt \ --image images/parrot.jpg --------------------------- Ara macao (Scarlet Macaw) Score : 0.61328125 --------------------------- Platycercus elegans (Crimson Rosella) Score : 0.15234375 Figure 2: Getting started with Google’s Coral TPU accelerator and the Raspberry Pi to perform bird classification. As you can see, MobileNet (trained on iNat Birds) has correctly labeled the image as “Macaw”, a type of parrot. Let’s try a second classification example: $ python classify_image.py \ --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \ --label models/inat_bird_labels.txt \ --image images/bird.bmp --------------------------- Poecile carolinensis (Carolina Chickadee) Score : 0.37109375 --------------------------- Poecile atricapillus (Black-capped Chickadee) Score : 0.29296875 Figure 3: Bird classification using Python and the Google Coral. Read this tutorial to get started with Google’s Coral TPU accelerator and the Raspberry Pi. Learn to install the necessary software and run example code. Notice that the image of the Chickadee has been correctly classified. In fact, the top two results are both forms of Chickadees: (1) Carolina, and (2) Black-capped. Now let’s try performing face detection using the Google Coral USB Accelerator: $ python object_detection.py \ --model models/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite \ --input images/grace_hopper.bmp ----------------------------------------- score = 0.99609375 box = [143.88912090659142, 40.834905445575714, 381.8060402870178, 365.49142384529114] Please check object_detection_result.jpg Figure 4: Face detection with the Google Coral and Raspberry Pi is very fast. Read this tutorial to get started. Here the MobileNet + SSD face detector was able to detect Grace Hopper’s face in the image.
There is a very faint red box around Grace’s face (I recommend clicking the image to enlarge it so that you can see the face detection box). In the future, we will learn how to perform custom object detection during which time you can draw a thicker detection box. The next example shows how to perform object detection using a MobileNet + SSD trained on the COCO dataset: $ python object_detection.py \ --model models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite \ --input images/cat.bmp ----------------------------------------- score = 0.96484375 box = [52.70467400550842, 37.87856101989746, 448.4963893890381, 391.3172245025635] ----------------------------------------- score = 0.2109375 box = [0.0, 0.5118846893310547, 191.08786582946777, 194.69362497329712] ----------------------------------------- score = 0.2109375 box = [300.4741072654724, 38.08128833770752, 382.5985550880432, 169.52738761901855] ----------------------------------------- score = 0.16015625 box = [359.85671281814575, 46.61980867385864, 588.858425617218, 357.5845241546631] ----------------------------------------- score = 0.16015625 box = [0.0, 10.966479778289795, 191.53071641921997, 378.33733558654785] ----------------------------------------- score = 0.12109375 box = [126.62454843521118, 4.192984104156494, 591.4307713508606, 262.3262882232666] ----------------------------------------- score = 0.12109375 box = [427.05928087234497, 84.77717638015747, 600.0, 332.24596977233887] ----------------------------------------- score = 0.08984375 box = [258.74093770980835, 3.4015893936157227, 600.0, 215.32137393951416] ----------------------------------------- score = 0.08984375 box = [234.9416971206665, 33.762264251708984, 594.8572397232056, 383.5402488708496] ----------------------------------------- score = 0.08984375 box = [236.90505623817444, 51.90783739089966, 407.265830039978, 130.80371618270874] Please check object_detection_result.jpg Figure 5: Getting started with object detection using the Google Coral EdgeTPU USB Accelerator device. Notice there are ten detections in Figure 5 (faint red boxes; click to enlarge), but only one cat in the image — why is that? The reason is that the object_detection.py script is not filtering on a minimum probability. You could easily modify the script to ignore detections with < 50% probability (we’ll work on custom object detection with the Google Coral next month). For fun, I decided to try an image that was not included in the example TPU runtime library demos. Here’s an example of applying the face detector to a custom image: $ python object_detection.py \ --model models/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite \ --input ~/IMG_7687.jpg ----------------------------------------- score = 0.98046875 box = [190.66683948040009, 0.0, 307.4474334716797, 125.00646710395813] Figure 6: Testing face detection (using my own face) with the Google Coral and Raspberry Pi. Sure enough, my face is detected!
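If the faint boxes bother you, you can always re-draw the reported coordinates yourself with OpenCV. The snippet below is a hedged sketch: the box values are copied from the face detection output above, and I am assuming they are ordered [x1, y1, x2, y2] as printed; the image path is the custom photo used in that example:

import cv2

# re-draw the reported face box with a thicker, brighter rectangle (assumed [x1, y1, x2, y2] order)
image = cv2.imread("IMG_7687.jpg")
(x1, y1, x2, y2) = (190, 0, 307, 125)
cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 3)
cv2.imwrite("face_box_thick.jpg", image)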
Finally, here’s an example of running the MobileNet + SSD on the same image: $ python object_detection.py \ --model models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite \ --label models/coco_labels.txt \ --input ~/IMG_7687.jpg ----------------------------------------- person score = 0.87890625 box = [58.70787799358368, 10.639026761054993, 371.2196350097656, 494.61638927459717] ----------------------------------------- dog score = 0.58203125 box = [50.500258803367615, 358.102411031723, 162.57299482822418, 500.0] ----------------------------------------- dog score = 0.33984375 box = [13.502731919288635, 287.04309463500977, 152.83603966236115, 497.8201985359192] ----------------------------------------- couch score = 0.26953125 box = [0.0, 88.88640999794006, 375.0, 423.55993390083313] ----------------------------------------- couch score = 0.16015625 box = [3.753773868083954, 64.79595601558685, 201.68977975845337, 490.678071975708] ----------------------------------------- dog score = 0.12109375 box = [65.94736874103546, 335.2701663970947, 155.95845878124237, 462.4992609024048] ----------------------------------------- dog score = 0.12109375 box = [3.5936199128627777, 335.3758156299591, 118.05401742458344, 497.33099341392517] ----------------------------------------- couch score = 0.12109375 box = [49.873560667037964, 97.65596687793732, 375.0, 247.15487658977509] ----------------------------------------- dog score = 0.12109375 box = [92.47469902038574, 338.89272809028625, 350.16247630119324, 497.23270535469055] ----------------------------------------- couch score = 0.12109375 box = [20.54794132709503, 99.93192553520203, 375.0, 369.604617357254] Figure 7: An example of running the MobileNet SSD object detector on the Google Coral + Raspberry Pi.
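Cleaning up these extra detections is just a matter of thresholding on the score. The sketch below is generic: the (label, score) tuples are hypothetical stand-ins mirroring the printout above, not the actual objects returned by the demo script:

# hypothetical (label, score) pairs mirroring the printout above
detections = [("person", 0.87890625), ("dog", 0.58203125), ("couch", 0.26953125), ("dog", 0.12109375)]

MIN_CONFIDENCE = 0.5
kept = [(label, score) for (label, score) in detections if score >= MIN_CONFIDENCE]
print(kept)  # [('person', 0.87890625), ('dog', 0.58203125)]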
Again, we can improve results by filtering on a minimum probability to remove the extraneous detections. Doing so would leave only two detections: person (87.89%) and dog (58.20%). What about training custom models for Google’s Coral? You’ll notice that I’m only using pre-trained deep learning models on the Google Coral in this post — what about custom models that you train yourself? Google does provide some documentation on that, but it’s much more advanced, far too much for me to include in this blog post. If you’re interested in learning how to train your own custom models for Google’s Coral, I would recommend you take a look at my upcoming book, Raspberry Pi for Computer Vision (Complete Bundle) where I’ll be covering the Google Coral in detail. How do I use Google Coral’s Python runtime library in my own custom scripts? Using the edgetpu library in conjunction with OpenCV and your own custom Python scripts is outside the scope of this post. I’ll cover custom Python scripts for Google Coral classification and object detection next month as well as in my Raspberry Pi for Computer Vision book. Thoughts, tips, and suggestions when using Google’s TPU USB Accelerator Overall, I really liked the Coral USB Accelerator.
I thought it was super easy to configure and install, and while not all the demos ran out of the box, with some basic knowledge of file paths, I was able to get them running in a few minutes. In the future, I would like to see the Google TPU runtime library made more compatible with Python virtual environments. Requiring the sym-link isn’t ideal. I’ll also add that inference on the Raspberry Pi is a bit slower than what’s advertised by the Google Coral TPU Accelerator — that’s actually not a problem with the TPU Accelerator, but rather the Raspberry Pi. What do I mean by that? Keep in mind that the Raspberry Pi 3B+ uses USB 2.0 but for more optimal inference speeds the Google Coral USB Accelerator recommends USB 3. Since the RPi 3B+ doesn’t have USB 3, there’s not much we can do about that until the RPi 4 comes out — once it does, we’ll have even faster inference on the Pi using the Coral USB Accelerator. Update 2019-12-30: The Raspberry Pi 4B includes USB 3.0 capability. The total time it takes to transfer an image, perform inference, and obtain results is much faster. Be sure to refer to Chapter 23.2 “Benchmarking and Profiling your Scripts” inside Raspberry Pi for Computer Vision to learn how to benchmark your deep learning scripts on the Raspberry Pi.
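If you just want a rough number without a full profiling setup, wrapping the inference call in a timer gets you most of the way there. In the sketch below, run_inference() is a hypothetical placeholder for whatever classification or detection call your own script makes; it is not part of the EdgeTPU demo code:

import time

def run_inference():
    # hypothetical placeholder -- substitute your actual classification/detection call here
    time.sleep(0.05)

# warm up once (the first call often includes one-time model setup), then time N runs
run_inference()
N = 50
start = time.perf_counter()
for _ in range(N):
    run_inference()
elapsed = time.perf_counter() - start
print("avg inference: {:.1f} ms ({:.1f} FPS)".format(1000 * elapsed / N, N / elapsed))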
Finally, I’ll note that once or twice during the object detection examples it appeared that the Coral USB Accelerator “locked up” and wouldn’t perform inference (I think it got “stuck” trying to load the model), forcing me to ctrl + c out of the script. Killing the script must have prevented a critical “shut down” script from running on the Coral — any subsequent executions of the demo Python scripts would result in an error. To fix the problem I had to unplug the Coral USB accelerator and then plug it back in. Again, I’m not sure why that happened and I couldn’t find any documentation on the Google Coral site that referenced the issue.
Summary In this tutorial, you learned how to get started with the Google Coral USB Accelerator. We started by installing the Edge TPU runtime library on your Debian-based operating system (we specifically used Raspbian for the Raspberry Pi). After that, we learned how to run the example demo scripts included in the Edge TPU library download. We also learned how to install the edgetpu library into a Python virtual environment (that way we can keep our packages/projects nice and tidy). We wrapped up the tutorial by discussing some of my thoughts, feedback, and suggestions when using the Coral USB Accelerator (be sure to refer to them first if you have any questions). I hope you enjoyed this tutorial! To be notified when future tutorials are published here on PyImageSearch, just enter your email address in the form below!
https://pyimagesearch.com/2019/05/06/getting-started-with-the-nvidia-jetson-nano/
Click here to download the source code to this post In this tutorial, you will learn how to get started with your NVIDIA Jetson Nano, including: First boot Installing system packages and prerequisites Configuring your Python development environment Installing Keras and TensorFlow on the Jetson Nano Changing the default camera Classification and object detection with the Jetson Nano I’ll also provide my commentary along the way, including what tripped me up when I set up my Jetson Nano, ensuring you avoid the same mistakes I made. By the time you’re done with this tutorial, your NVIDIA Jetson Nano will be configured and ready for deep learning! To learn how to get started with the NVIDIA Jetson Nano, just keep reading! Getting started with the NVIDIA Jetson Nano Figure 1: In this blog post, we’ll get started with the NVIDIA Jetson Nano, an AI edge device capable of 472 GFLOPS of computation. At around $100 USD, the device is packed with capability including a Maxwell architecture 128 CUDA core GPU covered up by the massive heatsink shown in the image. (image source) In the first part of this tutorial, you will learn how to download and flash the NVIDIA Jetson Nano .img file to your micro-SD card. I’ll then show you how to install the required system packages and prerequisites. From there you will configure your Python development environment and learn how to install the Jetson Nano-optimized version of Keras and TensorFlow on your device. I’ll then show you how to access the camera on your Jetson Nano and even perform image classification and object detection on the Nano as well. We’ll then wrap up the tutorial with a brief discussion on the Jetson Nano — a full benchmark and comparison between the NVIDIA Jetson Nano, Google Coral, and Movidius NCS will be published in a future blog post.
Before you get started with the Jetson Nano Before you can even boot up your NVIDIA Jetson Nano you need three things: A micro-SD card (minimum 16GB) A 5V 2.5A MicroUSB power supply An ethernet cable I really want to stress the minimum of a 16GB micro-SD card. The first time I configured my Jetson Nano I used a 16GB card, but that space was eaten up fast, particularly when I installed the Jetson Inference library which will download a few gigabytes of pre-trained models. I, therefore, recommend a 32GB micro-SD card for your Nano. Secondly, when it comes to your 5V 2.5A MicroUSB power supply, in their documentation NVIDIA specifically recommends this one from Adafruit. Finally, you will need an ethernet cable when working with the Jetson Nano which I find really, really frustrating. The NVIDIA Jetson Nano is marketed as being a powerful IoT and edge computing device for Artificial Intelligence… …and if that’s the case, why is there not a WiFi adapter on the device? I don’t understand NVIDIA’s decision there and I don’t believe it should be up to the end user of the product to “bring their own WiFi adapter”. If the goal is to bring AI to IoT and edge computing then there should be WiFi. But I digress. You can read more about NVIDIA’s recommendations for the Jetson Nano here.
Download and flash the .img file to your micro-SD card Before we can get started installing any packages or running any demos on the Jetson Nano, we first need to download the Jetson Nano Developer Kit SD Card Image from NVIDIA’s website. NVIDIA provides documentation for flashing the .img file to a micro-SD card for Windows, macOS, and Linux — you should choose the flash instructions appropriate for your particular operating system. First boot of the NVIDIA Jetson Nano After you’ve downloaded and flashed the .img file to your micro-SD card, insert the card into the micro-SD card slot. I had a hard time finding the card slot — it’s actually underneath the heat sink, right where my finger is pointing to: Figure 2: Where is the microSD card slot on the NVIDIA Jetson Nano? The microSD receptacle is hidden under the heatsink as shown in the image. I think NVIDIA could have made the slot a bit more obvious, or at least better documented it on their website. After sliding the micro-SD card home, connect your power supply and boot. Assuming your Jetson Nano is connected to an HDMI output, you should see the following (or similar) displayed on your screen: Figure 3: To get started with the NVIDIA Jetson Nano AI device, just flash the .img (preconfigured with Jetpack) and boot. From here we’ll be installing TensorFlow and Keras in a virtual environment. The Jetson Nano will then walk you through the install process, including setting your username/password, timezone, keyboard layout, etc.
Installing system packages and prerequisites In the remainder of this guide, I’ll be showing you how to configure your NVIDIA Jetson Nano for deep learning, including: Installing system package prerequisites. Installing Keras and TensorFlow on the Jetson Nano. Installing the Jetson Inference engine. Let’s get started by installing the required system packages: $ sudo apt-get install git cmake $ sudo apt-get install libatlas-base-dev gfortran $ sudo apt-get install libhdf5-serial-dev hdf5-tools $ sudo apt-get install python3-dev Provided you have a good internet connection, the above commands should only take a few minutes to finish up. Configuring your Python environment The next step is to configure our Python development environment. Let’s first install pip, Python’s package manager: $ wget https://bootstrap.pypa.io/get-pip.py $ sudo python3 get-pip.py $ rm get-pip.py We’ll be using Python virtual environments in this guide to keep our Python development environments independent and separate from each other. Using Python virtual environments is a best practice and will help you avoid having to maintain a micro-SD card for each development environment you want to use on your Jetson Nano. To manage our Python virtual environments we’ll be using virtualenv and virtualenvwrapper which we can install using the following command: $ sudo pip install virtualenv virtualenvwrapper Once we’ve installed virtualenv and virtualenvwrapper we need to update our ~/.bashrc file. I’m choosing to use nano but you can use whatever editor you are most comfortable with: $ nano ~/.bashrc Scroll down to the bottom of the ~/.bashrc file and add the following lines: # virtualenv and virtualenvwrapper export WORKON_HOME=$HOME/.virtualenvs export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3 source /usr/local/bin/virtualenvwrapper.sh After adding the above lines, save and exit the editor. Next, we need to reload the contents of the ~/.bashrc file using the source command: $ source ~/.bashrc We can now create a Python virtual environment using the mkvirtualenv command — I’m naming my virtual environment deep_learning, but you can name it whatever you would like: $ mkvirtualenv deep_learning -p python3 Installing TensorFlow and Keras on the NVIDIA Jetson Nano Before we can install TensorFlow and Keras on the Jetson Nano, we first need to install NumPy.
First, make sure you are inside the deep_learning virtual environment by using the workon command: $ workon deep_learning From there, you can install NumPy: $ pip install numpy Installing NumPy on my Jetson Nano took ~10-15 minutes as it had to be compiled on the system (there are currently no pre-built versions of NumPy for the Jetson Nano). The next step is to install Keras and TensorFlow on the Jetson Nano. You may be tempted to do a simple pip install tensorflow-gpu — do not do this! Instead, NVIDIA has provided an official release of TensorFlow for the Jetson Nano. You can install the official Jetson Nano TensorFlow by using the following command: $ pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==1.13.1+nv19.3 Installing NVIDIA’s tensorflow-gpu package took ~40 minutes on my Jetson Nano. The final step here is to install SciPy and Keras: $ pip install scipy $ pip install keras These installs took ~35 minutes. Compiling and installing Jetson Inference on the Nano The Jetson Nano .img already has JetPack installed so we can jump immediately to building the Jetson Inference engine. The first step is to clone down the jetson-inference repo: $ git clone https://github.com/dusty-nv/jetson-inference $ cd jetson-inference $ git submodule update --init We can then configure the build using cmake. $ mkdir build $ cd build $ cmake .. There are two important things to note when running cmake: The cmake command will ask for root permissions so don’t walk away from the Nano until you’ve provided your root credentials. During the configure process, cmake will also download a few gigabytes of pre-trained sample models.
Make sure you have a few GB to spare on your micro-SD card! ( This is also why I recommend a 32GB microSD card instead of a 16GB card). After cmake has finished configuring the build, we can compile and install the Jetson Inference engine: $ make $ sudo make install Compiling and installing the Jetson Inference engine on the Nano took just over 3 minutes. What about installing OpenCV? I decided to cover installing OpenCV on a Jetson Nano in a future tutorial. There are a number of cmake configurations that need to be set to take full advantage of OpenCV on the Nano, and frankly, this post is long enough as is. Again, I’ll be covering how to configure and install OpenCV on a Jetson Nano in a future tutorial. Running the NVIDIA Jetson Nano demos When using the NVIDIA Jetson Nano you have two options for input camera devices: A CSI camera module, such as the Raspberry Pi camera module (which is compatible with the Jetson Nano, by the way) A USB webcam I’m currently using all of my Raspberry Pi camera modules for my upcoming book, Raspberry Pi for Computer Vision so I decided to use my Logitech C920 which is plug-and-play compatible with the Nano (you could use the newer Logitech C960 as well). The examples included with the Jetson Nano Inference library can be found in jetson-inference: detectnet-camera: Performs object detection using a camera as an input. detectnet-console: Also performs object detection, but using an input image rather than a camera.
imagenet-camera: Performs image classification using a camera. imagenet-console: Classifies an input image using a network pre-trained on the ImageNet dataset. segnet-camera: Performs semantic segmentation from an input camera. segnet-console: Also performs semantic segmentation, but on an image. A few other examples are included as well, including deep homography estimation and super resolution. However, in order to run these examples, we need to slightly modify the source code for the respective cameras. In each example you’ll see that the DEFAULT_CAMERA value is set to -1, implying that an attached CSI camera should be used. However, since we are using a USB camera, we need to change the DEFAULT_CAMERA value from -1 to 0 (or whatever the correct /dev/video V4L2 camera is). Luckily, this change is super easy to do! Let’s start with image classification as an example.
First, change directory into ~/jetson-inference/imagenet-camera: $ cd ~/jetson-inference/imagenet-camera From there, open up imagenet-camera.cpp: $ nano imagenet-camera.cpp You’ll then want to scroll down to approximately Line 37 where you’ll see the DEFAULT_CAMERA value: #define DEFAULT_CAMERA -1 // -1 for onboard camera, or change to index of /dev/video V4L2 camera (>=0) Simply change that value from -1 to 0: #define DEFAULT_CAMERA 0 // -1 for onboard camera, or change to index of /dev/video V4L2 camera (>=0) From there, save and exit the editor. After editing the C++ file you will need to recompile the example which is as simple as: $ cd ../build $ make $ sudo make install Keep in mind that make is smart enough to not recompile the entire library. It will only recompile files that have changed (in this case, the ImageNet classification example). Once compiled, change to the aarch64/bin directory and execute the imagenet-camera binary: $ cd aarch64/bin/ $ ./imagenet-camera imagenet-camera args (1): 0 [./imagenet-camera] [gstreamer] initialized gstreamer, version 1.14.1.0 [gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVCAMERA [gstreamer] gstCamera pipeline string: v4l2src device=/dev/video0 ! video/x-raw, width=(int)1280, height=(int)720, format=YUY2 ! videoconvert ! video/x-raw, format=RGB ! videoconvert ! appsink name=mysink [gstreamer] gstCamera successfully initialized with GST_SOURCE_V4L2 imagenet-camera: successfully initialized video device width: 1280 height: 720 depth: 24 (bpp) imageNet -- loading classification network model from: -- prototxt networks/googlenet.prototxt -- model networks/bvlc_googlenet.caffemodel -- class_labels networks/ilsvrc12_synset_words.txt -- input_blob 'data' -- output_blob 'prob' -- batch_size 2 [TRT] TensorRT version 5.0.6 [TRT] detected model format - caffe (extension '.caffemodel') [TRT] desired precision specified for GPU: FASTEST [TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8 [TRT] native precisions detected for GPU: FP32, FP16 [TRT] selecting fastest native precision for GPU: FP16 [TRT] attempting to open engine cache file networks/bvlc_googlenet.caffemodel.2.1.GPU.FP16.engine [TRT] loading network profile from engine cache... networks/bvlc_googlenet.caffemodel.2.1.GPU.FP16.engine [TRT] device GPU, networks/bvlc_googlenet.caffemodel loaded Here you can see that the GoogLeNet is loaded into memory, after which inference starts: Image classification is running at ~10 FPS on the Jetson Nano at 1280×720. IMPORTANT: If this is the first time you are loading a particular model then it could take 5-15 minutes to load the model.
Internally, the Jetson Nano Inference library is optimizing and preparing the model for inference. This only has to be done once so subsequent runs of the program will be significantly faster (in terms of model loading time, not inference). Now that we’ve tried image classification, let’s look at the object detection example on the Jetson Nano which is located in ~/jetson-inference/detectnet-camera/detectnet-camera.cpp. Again, if you are using a USB webcam you’ll want to edit approximately Line 39 of detectnet-camera.cpp and change DEFAULT_CAMERA from -1 to 0 and then recompile via make (again, only necessary if you are using a USB webcam). After compiling you can find the detectnet-camera binary in ~/jetson-inference/build/aarch64/bin. Let’s go ahead and run the object detection demo on the Jetson Nano now: $ ./detectnet-camera detectnet-camera args (1): 0 [./detectnet-camera] [gstreamer] initialized gstreamer, version 1.14.1.0 [gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVCAMERA [gstreamer] gstCamera pipeline string: v4l2src device=/dev/video0 ! video/x-raw, width=(int)1280, height=(int)720, format=YUY2 ! videoconvert ! video/x-raw, format=RGB ! videoconvert !