# Using a Mask Model on OCI with YOLOv5: Custom PyTorch From Scratch

## Introduction

In this article, we're going to learn how to load a YOLOv5 model into PyTorch, and then augment the detections with three different techniques:

1. Cropping and saving detections
2. Counting detected objects
3. Sorting detections

If you're a little confused about how we got to this point from scratch, check out the first and second articles (this article's predecessors):

- [Creating a CMask Detection Model on OCI with YOLOv5: Data Labeling with RoboFlow](https://medium.com/oracledevs/creating-a-cmask-detection-model-on-oci-with-yolov5-data-labeling-with-roboflow-5cff89cf9b0b)
- [Creating a Mask Model on OCI with YOLOv5: Training and Real-Time Inference](https://medium.com/oracledevs/creating-a-mask-model-on-oci-with-yolov5-training-and-real-time-inference-3534c7f9eb21)

They explain in detail how we obtained the data and how we trained the mask detection model we'll be using here.

## Why is this necessary?

The standard YOLOv5 tooling is great for quick inference, but it doesn't let us decide what happens with each individual detection. By building custom PyTorch code around the trained weights, we can add our own logic on top of the detections, which is exactly what the three techniques above need.
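
As a minimal sketch of the starting point (the checkpoint and image names below are placeholders, not the final implementation), the weights we trained in the previous article can be loaded straight into PyTorch through the YOLOv5 hub loader:

```python
import torch

# Load the mask model trained in the previous article into plain PyTorch.
# 'best.pt' is a placeholder path; point it at your own trained checkpoint.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')

# Run inference on any image: a local path, URL, numpy array or PIL image.
results = model('test_image.jpg')
results.print()  # per-class summary of what was detected
```

The three techniques below all build on this pattern.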

Some useful references on capturing an OBS virtual camera (or webcam) as an OpenCV source:

- https://obsproject.com/forum/resources/obs-virtualcam.949/
- https://github.com/obsproject/obs-studio/issues/3635
- https://stackoverflow.com/questions/50916903/how-to-process-vlc-udp-stream-over-opencv/50917584#50917584
- https://obsproject.com/forum/threads/how-to-get-virtual-camera-data-using-opencv%EF%BC%9F.137113/
- https://stackoverflow.com/questions/64391455/how-do-i-input-obs-virtual-cam-in-my-python-code-using-opencv
- https://www.reddit.com/r/computervision/comments/k6gxgo/use_obs_as_the_source_for_opencv/
- https://github.com/opencv/opencv/issues/19746

We'll pack all three implementations into one custom detector:

## Cropping & Saving Detections

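The sketch below shows the idea, assuming the `best.pt` checkpoint from the previous article and a placeholder image called `test_image.jpg`: YOLOv5's results object can crop and save every detected region for us.

```python
import torch

# Load the custom-trained mask model through the YOLOv5 PyTorch Hub loader.
# 'best.pt' is a placeholder for the checkpoint trained in the previous article.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')

# Run inference on a sample image.
results = model('test_image.jpg')

# Save one cropped image per detection; by default YOLOv5 writes them
# under a runs/detect/exp*/crops/<class name>/ directory.
crops = results.crop(save=True)
print(f'Saved {len(crops)} cropped detections')
```
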
## Counting Detected Objects

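Another sketch under the same placeholder names: exporting the detections to a pandas DataFrame makes counting objects per class a one-liner.

```python
import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')  # placeholder path
results = model('test_image.jpg')

# One row per detection: xmin, ymin, xmax, ymax, confidence, class, name
df = results.pandas().xyxy[0]

# Count detections per class (e.g. mask / incorrect / no mask)
print(df['name'].value_counts())
print(f'Total detections: {len(df)}')
```
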
## Sorting Detections

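And a sketch for sorting, again with placeholder names: once the detections are in a DataFrame, we can order them by confidence or left-to-right by their bounding-box coordinates.

```python
import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')  # placeholder path
results = model('test_image.jpg')
df = results.pandas().xyxy[0]

# Sort by confidence, highest first
by_confidence = df.sort_values('confidence', ascending=False)

# Or sort left-to-right across the frame using the xmin coordinate
left_to_right = df.sort_values('xmin')

print(by_confidence[['name', 'confidence']])
print(left_to_right[['name', 'xmin']])
```
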


## Conclusions

The trained model weights will be included in the GitHub repository and referenced here, so you can use the already-trained model without repeating the training yourself.


## Introduction

If you remember [an article I wrote a few weeks back](https://medium.com/oracledevs/creating-a-cmask-detection-model-on-oci-with-yolov5-data-labeling-with-roboflow-5cff89cf9b0b), I created a Computer Vision model able to recognize whether someone is wearing their COVID-19 mask correctly, incorrectly, or not at all.

Now, as a natural continuation of this topic, I'll show you how you can train the model using Oracle Cloud Infrastructure (OCI). This applies to any object detection model created using the YOLO (You Only Look Once) standard and format.

At the end of the article, you'll see the end result of performing inference (on myself):


## Hardware?

To get started, I went into my Oracle Cloud Infrastructure account and created a Compute instance. These are the specifications for the project:

* Shape: VM.GPU3.2
* GPU: 2 NVIDIA® Tesla® V100 GPUs ready for action.
* GPU Memory: 32GB
* CPU: 12 cores
* CPU Memory: 180GB

I specifically chose an OCI Custom Image (the AI 'all-in-one' Data Science Image for GPU) as the Operating System for my machine. The partner image that I chose is the following:


> **Note**: this custom image is very useful and often saves me a lot of time. It already has 99% of the things that I need for any Data Science-related project, so no time is wasted on installation and setup before getting to work. (It includes things like conda, CUDA, PyTorch, a Jupyter environment, VSCode, PyCharm, git, Docker, the OCI CLI... and much more. Make sure to read the [full custom image specs here](https://cloud.oracle.com/marketplace/application/74084544/usageInformation)).

### Price Comparison

The hardware we're going to work with is [very expensive](https://www.amazon.com/PNY-TCSV100MPCIE-PB-Nvidia-Tesla-v100/dp/B076P84525), and most people can't be expected to have it at home. Nobody I know owns a $15,000 graphics card (if you know someone, let me know), and this is where the Cloud can really help us. OCI gives us access to these amazing machines for a fraction of the cost you would find from a competitor.

For example, I rented both NVIDIA V100s *just for **$2.50/hr***, and I'll be using these GPUs to train my models.

> **Note**: Be mindful of the resources you use in OCI. Just like with other Cloud providers, once you allocate a GPU in your Cloud account, you will be charged for it even if it sits idle. So, remember to terminate your GPU instances when you're finished to avoid unnecessary charges!

[Here's a link](https://www.oracle.com/cloud/price-list/) to the full OCI price list if you're curious.

## Training the Model with YOLOv5

Now I have my compute instance ready, and since I have almost no configuration overhead (I'm using the custom image), I can get straight to business.

Before training the model, I have to clone YOLOv5's repository:

```console
git clone https://github.com/ultralytics/yolov5.git
```

And then install all dependencies into my Python environment:

```console
cd /home/$USER/yolov5
pip install -r /home/$USER/yolov5/requirements.txt
```
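
Before kicking off training, it's worth a quick sanity check that PyTorch can actually see both V100s (a small sketch; the expected values assume the VM.GPU3.2 shape used here):

```python
import torch

print(torch.__version__)
print(torch.cuda.is_available())    # should print True on this image
print(torch.cuda.device_count())    # should print 2 on a VM.GPU3.2 shape

for i in range(torch.cuda.device_count()):
    print(torch.cuda.get_device_name(i))   # e.g. a Tesla V100 device name
```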

> **Note**: YOLOv8 was just released. I thought, "why not switch directly to YOLOv8, since it's basically an improved version of YOLOv5?" But I didn't want to overcomplicate things -- in future content, I'll switch to YOLOv8 and show you why it's better than the version we are using for this article!


### Downloading my Dataset

[The dataset](https://universe.roboflow.com/jasperan/public-mask-placement) is public and freely available for anyone who wants to use it:


> **Note**: thanks to RoboFlow and their team, you can even [test the model in your browser](https://universe.roboflow.com/jasperan/public-mask-placement/model/4) (by uploading your images/videos) or with your webcam!

I exported the dataset from RoboFlow in the YOLOv5 format. This downloaded everything as a ZIP file containing three directories: training, testing, and validation, each with its corresponding image set.

I pushed the dataset into my compute instance using FTP (File Transfer Protocol) and unzipped it:


Additionally, we have the `data.yaml` file containing the dataset's metadata.

To avoid absolute/relative path issues with my dataset, I also want to modify `data.yaml` and insert the *absolute* paths where all images (from the training, validation, and testing sets) are located, since by default it contains relative paths:

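A small sketch of that edit, assuming the dataset was unzipped to the folder below (the path and the `<split>/images` layout of the RoboFlow export are assumptions; adjust them to your own export):

```python
import os
import yaml  # PyYAML

DATASET_DIR = '/home/ubuntu/datasets/y5_mask_model_v1'  # assumed dataset location
yaml_path = os.path.join(DATASET_DIR, 'data.yaml')

with open(yaml_path) as f:
    data = yaml.safe_load(f)

# Swap the relative paths from the export for absolute ones.
for split in ('train', 'val', 'test'):
    if split in data:
        data[split] = os.path.join(DATASET_DIR, split, 'images')

with open(yaml_path, 'w') as f:
    yaml.safe_dump(data, f)

print(data)
```
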
Now, we're almost ready for training.

### Training Parameters

We're ready to make a couple of extra decisions regarding which parameters we'll use during training.

It's important to choose the right parameters, as doing otherwise can produce terrible models (the word *terrible* is intentional). So, let me explain what matters about each training parameter. The official documentation can be found [here](https://docs.ultralytics.com/config/).

* `--device`: specifies which CUDA device (or, by default, CPU) we want to use. Since I have two GPUs, I want to use both for training, so I set this to "0,1", which performs **distributed training**, although not in the most optimal way. (I'll write a future article on how to properly do Distributed Data Parallel with PyTorch.)
* `--epochs`: the total number of epochs we want to train the model for. I set this to 3000, although my model converged very precisely long before the 3000th epoch was done.

  > **Note**: YOLOv5 (like many Neural Network frameworks) implements **early stopping**, which stops training before the specified number of epochs if it can't find a way to improve the mAP (Mean Average Precision) for any class.

* `--batch`: the batch size. I set this to either 16 or 32 images per batch. Setting a lower value (especially considering that my dataset already has 10,000 images) is usually *bad practice* and can cause instability.
* Learning rate: I left this at the default of 0.01 (in YOLOv5 it's defined as `lr0` in the hyperparameter file rather than as a command-line flag).
* `--img` (image size): this parameter was probably the one that gave me the most trouble. I initially thought that all images -- if the model is trained with a specific image size -- must always follow that size; however, you don't need to worry about this, thanks to image subsampling and other techniques implemented to avoid the issue. This value should be the maximum of each picture's height and width, averaged across the dataset (see the short sketch after the note below for a way to estimate it).
* `--save-period`: specifies how often the model should save a checkpoint. For example, if I set this to 25, it will save a YOLOv5 checkpoint I can reuse every 25 trained epochs.

> **Note**: if I have 1,000 images with an average width of 1920 and height of 1080, I'll probably create a model with image size = 640 and let my images be subsampled. If I then have issues with detections, I might create a model with a higher image size, but training time will ramp up, and inference will also require more computing power.
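
A quick way to estimate that average maximum side for your own dataset (a sketch; the directory path is an assumption, and it assumes JPEG images as exported by RoboFlow):

```python
from pathlib import Path

from PIL import Image  # Pillow

# Assumed location of the training images from the RoboFlow export.
IMAGES_DIR = Path('./datasets/y5_mask_model_v1/train/images')

max_sides = [
    max(Image.open(p).size)   # Image.size is (width, height)
    for p in IMAGES_DIR.glob('*.jpg')
]

print(f'Average max side: {sum(max_sides) / len(max_sides):.0f} px')
```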
|

### Which YOLOv5 checkpoint should we choose?

The second and last decision we need to make is which YOLOv5 checkpoint we're going to start from. It's **highly recommended** that you start training from one of the five available checkpoints:


> **Note**: you can also start training 100% from scratch, but you should only do this if what you're trying to detect looks nothing like anything in real-world datasets, e.g. astrophotography. The upside of using a checkpoint is that YOLOv5 has already been trained up to a point with real-world data. So, anything that resembles the real world can easily be trained from a checkpoint, which will help you reduce training time (and therefore expense).

The higher a checkpoint's accuracy, the more parameters it has. Here's a detailed comparison of all available pre-trained checkpoints:

| Model | size<br><sup>(pixels)</sup> | Mean Average Precision<sup>val<br>50-95</sup> | Mean Average Precision<sup>val<br>50</sup> | Speed<br><sup>CPU b1<br>(ms)</sup> | Speed<br><sup>V100 b1<br>(ms)</sup> | Speed<br><sup>V100 b32<br>(ms)</sup> | Number of parameters<br><sup>(M)</sup> | FLOPs<br><sup>@640 (B)</sup> |
| ----- | ------------ | ------------------------------ | --------------------------- | --------------- | ---------------- | ----------------- | ----------------------- | ------------- |
| [YOLOv5n](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5n.pt) | 640 | 28.0 | 45.7 | **45** | **6.3** | **0.6** | **1.9** | **4.5** |
| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5s.pt) | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5m.pt) | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5l.pt) | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5x.pt) | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
| | | | | | | | | |
| [YOLOv5n6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5n6.pt) | 1280 | 36.0 | 54.4 | 153 | 8.1 | 2.1 | 3.2 | 4.6 |
| [YOLOv5s6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5s6.pt) | 1280 | 44.8 | 63.7 | 385 | 8.2 | 3.6 | 12.6 | 16.8 |
| [YOLOv5m6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5m6.pt) | 1280 | 51.3 | 69.3 | 887 | 11.1 | 6.8 | 35.7 | 50.0 |
| [YOLOv5l6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5l6.pt) | 1280 | 53.7 | 71.3 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 |
| [YOLOv5x6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5x6.pt)<br>+[TTA](https://github.com/ultralytics/yolov5/issues/303) | 1280<br>1536 | 55.0<br>**55.8** | 72.7<br>**72.7** | 3136<br>- | 26.2<br>- | 19.4<br>- | 140.7<br>- | 209.8<br>- |

> **Note**: all checkpoints were trained for 300 epochs with the default settings (you can find all of them [in the official docs](https://docs.ultralytics.com/config/)). The nano and small versions use [these hyperparameters](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml); all others use [these](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml).

Also note that -- if we want to create a model with an image size greater than 640 -- we should select one of the corresponding YOLOv5 checkpoints (those ending in `6`).

So, for this model, since I'll use an image size of 640, I'll create a first version using **YOLOv5s** and another one with **YOLOv5x**. You only need to train one, but I was curious to see how the two models differ when applied to the same video.

### Training

Now, we just need to run the following commands...

```console
# for yolov5s
python train.py --img 640 --data ./datasets/y5_mask_model_v1/data.yaml --weights yolov5s.pt --name y5_mask_detection --save-period 25 --device 0,1 --batch 16 --epochs 3000

# for yolov5x
python train.py --img 640 --data ./datasets/y5_mask_model_v1/data.yaml --weights yolov5x.pt --name y5_mask_detection --save-period 25 --device 0,1 --batch 16 --epochs 3000
```

...and the model will start training. Depending on the size of the dataset, each epoch will take more or less time. In my case, with 10,000 images, each epoch took about 2 minutes to train and 20 seconds to validate.


For each epoch, we get a breakdown of the epoch's training time and the model's mAP, so we can see how the model progresses over time.

After the training is done, we can have a look at the results. The visualizations are generated automatically, and they are pretty similar to what I found using RoboFlow Train in the previous article. I looked at the most promising graphs:


> **Note**: this means that both the `incorrect` and `no mask` classes are underrepresented compared to the `mask` class. An idea for the future is to increase the number of examples for both of these classes.

|
The confusion matrix tells us how many predictions on the validation set were correct, and how many weren't:


Since I told the model to auto-save every 25 epochs, the resulting directory is about 1GB in size. I only care about the best-performing model out of all the checkpoints, so I keep `best.pt` (the checkpoint with the highest mAP) and delete all the others.
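
A small sketch of that cleanup (the run directory name is an assumption; YOLOv5 saves weights under `runs/train/<run name>/weights/` by default):

```python
from pathlib import Path

# Assumed training run directory; adjust to your own --name / run number.
weights_dir = Path('runs/train/y5_mask_detection/weights')

for ckpt in weights_dir.glob('*.pt'):
    if ckpt.name != 'best.pt':
        ckpt.unlink()               # removes last.pt and the periodic epoch checkpoints
        print(f'Deleted {ckpt.name}')
```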

The model took **168** epochs to finish training (early stopping kicked in, having found the best model at epoch 68), with an average of **2 minutes and 34 seconds** per epoch -- roughly 7 hours in total.


## YOLOv5 Inference

Now that we have the model, it's time to use it. In this article, we're only going to cover how to run inference through the YOLOv5 interface; I will prepare a custom PyTorch inference detector for the next article.

To run inference with our freshly trained model, we take the path to the `best.pt` PyTorch weights and execute:

```console
# for a YouTube video
python detect.py --weights="H:/Downloads/trained_mask_model/weights/best.pt" --source="<YT_URL>" --line-thickness 1 --hide-conf --data=data.yaml

# for a local video
python detect.py --weights="H:/Downloads/trained_mask_model/weights/best.pt" --source="example_video.mp4" --line-thickness 1 --hide-conf --data=data.yaml
```

> **Note**: it's important to specify the `data.yaml` file (containing the dataset's metadata) and the pre-trained weights we obtained from training. Also, you can change the default line width drawn by YOLO using the `--line-thickness` option.

|

The inference source can be any of the following:

- A YouTube video
- A local MP4 / MKV file
- A directory containing individual images
- Screen input (takes screenshots of what you're seeing)
- HTTP, RTMP, or RTSP streams (e.g. Twitch)
- A webcam (e.g. `--source 0`)

## Results!

I prepared this YouTube video to compare the detections (on the same footage) from the two models I've trained:

[Watch the comparison video on YouTube](https://www.youtube.com/watch?v=LPRrbPiZ2X8)

## Conclusions

The accuracy of both models is pretty good, and I'm happy with the results. The model performs a bit worse on media with several people in the frame, but it still holds up well.

In the next article, I'll create a custom PyTorch inference detector (and explain the code), which will let us personalize everything we see -- something that the standard YOLO framework doesn't give us -- and I'll also explain how to get started with distributed model training.

If you'd like to see any special use cases or features implemented in the future, let me know in the comments!

If you’re curious about the goings-on of Oracle Developers in their natural habitat like me, come join us [on our public Slack channel!](https://bit.ly/odevrel_slack) We don’t mind being your fish bowl 🐠.

Stay tuned...

## Acknowledgments

* **Author** - [Nacho Martinez](https://www.linkedin.com/in/ignacio-g-martinez/), Data Science Advocate @ Oracle Developer Relations
* **Last Updated By/Date** - January 23rd, 2023