{"id":2179,"date":"2019-05-23T19:00:09","date_gmt":"2019-05-23T19:00:09","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2019\/05\/23\/how-to-perform-object-detection-in-photographs-using-mask-r-cnn-with-keras\/"},"modified":"2019-05-23T19:00:09","modified_gmt":"2019-05-23T19:00:09","slug":"how-to-perform-object-detection-in-photographs-using-mask-r-cnn-with-keras","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2019\/05\/23\/how-to-perform-object-detection-in-photographs-using-mask-r-cnn-with-keras\/","title":{"rendered":"How to Perform Object Detection in Photographs Using Mask R-CNN with Keras"},"content":{"rendered":"<p>Author: Jason Brownlee<\/p>\n<div>\n<p>Object detection is a task in computer vision that involves identifying the presence, location, and type of one or more objects in a given photograph.<\/p>\n<p>It is a challenging problem that involves building upon methods for object recognition (e.g. where are they), object localization (e.g. what are their extent), and object classification (e.g. what are they).<\/p>\n<p>In recent years, deep learning techniques have achieved state-of-the-art results for object detection, such as on standard benchmark datasets and in computer vision competitions. 
Most notable is the R-CNN, or Region-Based Convolutional Neural Network, and the most recent variation, called Mask R-CNN, which is capable of achieving state-of-the-art results on a range of object detection tasks.<\/p>\n<p>In this tutorial, you will discover how to use the Mask R-CNN model to detect objects in new photographs.<\/p>\n<p>After completing this tutorial, you will know:<\/p>\n<ul>\n<li>The region-based Convolutional Neural Network family of models for object detection and the most recent variation called Mask R-CNN.<\/li>\n<li>The best-of-breed open source library implementation of the Mask R-CNN for the Keras deep learning library.<\/li>\n<li>How to use a pre-trained Mask R-CNN to perform object localization and detection on new photographs.<\/li>\n<\/ul>\n<p>Let\u2019s get started.<\/p>\n<div id=\"attachment_7696\" style=\"width: 650px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-7696\" class=\"size-full wp-image-7696\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2019\/05\/How-to-Perform-Object-Detection-in-Photographs-With-Mask-R-CNN-in-Keras.jpg\" alt=\"How to Perform Object Detection in Photographs With Mask R-CNN in Keras\" width=\"640\" height=\"480\" srcset=\"http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2019\/05\/How-to-Perform-Object-Detection-in-Photographs-With-Mask-R-CNN-in-Keras.jpg 640w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2019\/05\/How-to-Perform-Object-Detection-in-Photographs-With-Mask-R-CNN-in-Keras-300x225.jpg 300w\" sizes=\"(max-width: 640px) 100vw, 640px\"><\/p>\n<p id=\"caption-attachment-7696\" class=\"wp-caption-text\">How to Perform Object Detection in Photographs With Mask R-CNN in Keras<br \/>Photo by <a href=\"https:\/\/www.flickr.com\/photos\/khianti\/3414236401\/\">Ole Husby<\/a>, some rights reserved.<\/p>\n<\/div>\n<h2>Tutorial 
Overview<\/h2>\n<p>This tutorial is divided into three parts; they are:<\/p>\n<ol>\n<li>R-CNN and Mask R-CNN<\/li>\n<li>Matterport Mask R-CNN Project<\/li>\n<li>Object Detection with Mask R-CNN<\/li>\n<\/ol>\n<h2>Mask R-CNN for Object Detection<\/h2>\n<p>Object detection is a computer vision task that involves both localizing one or more objects within an image and classifying each object in the image.<\/p>\n<p>It is a challenging computer vision task that requires both successful object localization in order to locate and draw a bounding box around each object in an image, and object classification to predict the correct class of object that was localized.<\/p>\n<p>An extension of object detection involves marking the specific pixels in the image that 
belong to each detected object instead of using coarse bounding boxes during object localization. This harder version of the problem is generally referred to as object segmentation or instance segmentation.<\/p>\n<p>The Region-Based Convolutional Neural Network, or R-CNN, is a family of convolutional neural network models designed for object detection, developed by <a href=\"http:\/\/www.rossgirshick.info\/\">Ross Girshick<\/a>, et al.<\/p>\n<p>There are perhaps four main variations of the approach, resulting in the current pinnacle called Mask R-CNN. The salient aspects of each variation can be summarized as follows:<\/p>\n<ul>\n<li><strong>R-CNN<\/strong>: Bounding boxes are proposed by the \u201c<em>selective search<\/em>\u201d algorithm, each of which is warped to a fixed size and features are extracted via a deep convolutional neural network, such as <a href=\"https:\/\/en.wikipedia.org\/wiki\/AlexNet\">AlexNet<\/a>, before a final set of object classifications is made with linear SVMs.<\/li>\n<li><strong>Fast R-CNN<\/strong>: Simplified design with a single model; bounding boxes are still specified as input, but a region-of-interest pooling layer is used after the deep CNN to consolidate regions and the model predicts both class labels and regions of interest directly.<\/li>\n<li><strong>Faster R-CNN<\/strong>: Addition of a Region Proposal Network that interprets features extracted from the deep CNN and learns to propose regions-of-interest directly.<\/li>\n<li><strong>Mask R-CNN<\/strong>: Extension of Faster R-CNN that adds an output model for predicting a mask for each detected object.<\/li>\n<\/ul>\n<p>The Mask R-CNN model introduced in the 2017 paper titled \u201c<a href=\"https:\/\/arxiv.org\/abs\/1703.06870\">Mask R-CNN<\/a>\u201d is the most recent variation in the family of models and supports both object detection and object segmentation. 
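<\/p>\n<p>Throughout this family of models, candidate boxes are compared against ground-truth boxes using the intersection over union (IoU) measure, both for labeling region proposals during training and for scoring detections. As background, a minimal generic sketch of IoU for boxes in the (y1, x1, y2, x2) ordering used later in this tutorial (this helper is illustrative and is not part of any of the R-CNN codebases):<\/p>

```python
# illustrative helper: intersection over union (IoU) of two boxes,
# each given as (y1, x1, y2, x2)
def box_iou(a, b):
    # coordinates of the overlapping region (if any)
    y1, x1 = max(a[0], b[0]), max(a[1], b[1])
    y2, x2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, y2 - y1) * max(0, x2 - x1)
    # areas of each box, then the union
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / float(union)

# identical boxes overlap perfectly
print(box_iou((0, 0, 10, 10), (0, 0, 10, 10)))  # 1.0
# partially overlapping boxes give a value between 0 and 1
print(box_iou((0, 0, 10, 10), (0, 5, 10, 15)))
```

<p>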
The paper provides a nice summary of the model lineage to that point:<\/p>\n<blockquote>\n<p>The Region-based CNN (R-CNN) approach to bounding-box object detection is to attend to a manageable number of candidate object regions and evaluate convolutional networks independently on each RoI. R-CNN was extended to allow attending to RoIs on feature maps using RoIPool, leading to fast speed and better accuracy. Faster R-CNN advanced this stream by learning the attention mechanism with a Region Proposal Network (RPN). Faster R-CNN is flexible and robust to many follow-up improvements, and is the current leading framework in several benchmarks.<\/p>\n<\/blockquote>\n<p>\u2014 <a href=\"https:\/\/arxiv.org\/abs\/1703.06870\">Mask R-CNN<\/a>, 2017.<\/p>\n<p>The family of methods may be among the most effective for object detection, achieving then-state-of-the-art results on computer vision benchmark datasets. Although accurate, the models can be slow when making a prediction as compared to alternate models, such as YOLO, that may be less accurate but are designed for real-time prediction.<\/p>\n<h2>Matterport Mask R-CNN Project<\/h2>\n<p>Mask R-CNN is a sophisticated model to implement, especially as compared to a simple or even state-of-the-art deep convolutional neural network model.<\/p>\n<p>Source code is available for each version of the R-CNN model, provided in separate GitHub repositories with prototype models based on the <a href=\"http:\/\/caffe.berkeleyvision.org\/\">Caffe deep learning framework<\/a>. 
For example:<\/p>\n<ul>\n<li><a href=\"https:\/\/github.com\/rbgirshick\/rcnn\">R-CNN: Regions with Convolutional Neural Network Features, GitHub<\/a>.<\/li>\n<li><a href=\"https:\/\/github.com\/rbgirshick\/fast-rcnn\">Fast R-CNN, GitHub<\/a>.<\/li>\n<li><a href=\"https:\/\/github.com\/rbgirshick\/py-faster-rcnn\">Faster R-CNN Python Code, GitHub<\/a>.<\/li>\n<li><a href=\"https:\/\/github.com\/facebookresearch\/Detectron\">Detectron, Facebook AI, GitHub<\/a>.<\/li>\n<\/ul>\n<p>Instead of developing an implementation of the R-CNN or Mask R-CNN model from scratch, we can use a reliable third-party implementation built on top of the Keras deep learning framework.<\/p>\n<p>The best-of-breed third-party implementation of Mask R-CNN is the <a href=\"https:\/\/github.com\/matterport\/Mask_RCNN\">Mask R-CNN Project<\/a> developed by <a href=\"https:\/\/matterport.com\/\">Matterport<\/a>. The project is open source, released under a permissive license (i.e. the MIT license), and the code has been widely used on a variety of projects and Kaggle competitions.<\/p>\n<p>Nevertheless, it is an open source project, subject to the whims of the project developers. As such, <a href=\"https:\/\/github.com\/jbrownlee\/Mask_RCNN\">I have a fork of the project available<\/a>, just in case there are major changes to the API in the future.<\/p>\n<p>The project is light on API documentation, although it does provide a number of examples in the form of Python notebooks that you can use to understand how to use the library by example. 
Two notebooks that may be helpful to review are:<\/p>\n<ul>\n<li><a href=\"https:\/\/github.com\/matterport\/Mask_RCNN\/blob\/master\/samples\/demo.ipynb\">Mask R-CNN Demo, Notebook<\/a>.<\/li>\n<li><a href=\"https:\/\/github.com\/matterport\/Mask_RCNN\/blob\/master\/samples\/coco\/inspect_model.ipynb\">Mask R-CNN \u2013 Inspect Trained Model, Notebook<\/a>.<\/li>\n<\/ul>\n<p>There are perhaps three main use cases for using the Mask R-CNN model with the Matterport library; they are:<\/p>\n<ul>\n<li><strong>Object Detection Application<\/strong>: Use a pre-trained model for object detection on new images.<\/li>\n<li><strong>New Model via Transfer Learning<\/strong>: Use a pre-trained model as a starting point in developing a model for a new object detection dataset.<\/li>\n<li><strong>New Model from Scratch<\/strong>: Develop a new model from scratch for an object detection dataset.<\/li>\n<\/ul>\n<p>In order to get familiar with the model and the library, we will look at the first example in the next section.<\/p>\n<h2>Object Detection With Mask R-CNN<\/h2>\n<p>In this section, we will use the Matterport Mask R-CNN library to perform object detection on arbitrary photographs.<\/p>\n<p>Much like using a pre-trained deep CNN for image classification, such as <a href=\"https:\/\/machinelearningmastery.com\/use-pre-trained-vgg-model-classify-objects-photographs\/\">VGG-16 trained on an ImageNet dataset<\/a>, we can use a pre-trained Mask R-CNN model to detect objects in new photographs. In this case, we will use a Mask R-CNN trained on the <a href=\"http:\/\/cocodataset.org\/\">MS COCO object detection problem<\/a>.<\/p>\n<h3>Mask R-CNN Installation<\/h3>\n<p>The first step is to install the library.<\/p>\n<p>At the time of writing, there is no distributed version of the library, so we have to install it manually. 
The good news is that this is very easy.<\/p>\n<p>Installation involves cloning the GitHub repository and running the installation script on your workstation. If you are having trouble, see the <a href=\"https:\/\/github.com\/matterport\/Mask_RCNN#installation\">installation instructions<\/a> buried in the library\u2019s readme file.<\/p>\n<h4>Step 1. Clone the Mask R-CNN GitHub Repository<\/h4>\n<p>This is as simple as running the following command from your command line:<\/p>\n<pre class=\"crayon-plain-tag\">git clone https:\/\/github.com\/matterport\/Mask_RCNN.git<\/pre>\n<p>This will create a new local directory with the name <em>Mask_RCNN<\/em> that looks as follows:<\/p>\n<pre class=\"crayon-plain-tag\">Mask_RCNN\r\n\u251c\u2500\u2500 assets\r\n\u251c\u2500\u2500 build\r\n\u2502   \u251c\u2500\u2500 bdist.macosx-10.13-x86_64\r\n\u2502   \u2514\u2500\u2500 lib\r\n\u2502       \u2514\u2500\u2500 mrcnn\r\n\u251c\u2500\u2500 dist\r\n\u251c\u2500\u2500 images\r\n\u251c\u2500\u2500 mask_rcnn.egg-info\r\n\u251c\u2500\u2500 mrcnn\r\n\u2514\u2500\u2500 samples\r\n    \u251c\u2500\u2500 balloon\r\n    \u251c\u2500\u2500 coco\r\n    \u251c\u2500\u2500 nucleus\r\n    \u2514\u2500\u2500 shapes<\/pre>\n<\/p>\n<h4>Step 2. 
Install the Mask R-CNN Library<\/h4>\n<p>The library can be installed directly via pip.<\/p>\n<p>Change directory into the <em>Mask_RCNN<\/em> directory and run the installation script.<\/p>\n<p>From the command line, type the following:<\/p>\n<pre class=\"crayon-plain-tag\">cd Mask_RCNN\r\npython setup.py install<\/pre>\n<p>On Linux or macOS, you may need to install the software with sudo permissions; for example, you may see an error such as:<\/p>\n<pre class=\"crayon-plain-tag\">error: can't create or remove files in install directory<\/pre>\n<p>In that case, install the software with sudo:<\/p>\n<pre class=\"crayon-plain-tag\">sudo python setup.py install<\/pre>\n<p>The library will then install directly and you will see a lot of successful installation messages ending with the following:<\/p>\n<pre class=\"crayon-plain-tag\">...\r\nFinished processing dependencies for mask-rcnn==2.1<\/pre>\n<p>This confirms that you installed the library successfully and that you have the latest version, which at the time of writing is version 2.1.<\/p>\n<h4>Step 3. Confirm the Library Was Installed<\/h4>\n<p>It is always a good idea to confirm that the library was installed correctly.<\/p>\n<p>You can confirm that the library was installed correctly by querying it via the pip command; for example:<\/p>\n<pre class=\"crayon-plain-tag\">pip show mask-rcnn<\/pre>\n<p>You should see output informing you of the version and installation location; for example:<\/p>\n<pre class=\"crayon-plain-tag\">Name: mask-rcnn\r\nVersion: 2.1\r\nSummary: Mask R-CNN for object detection and instance segmentation\r\nHome-page: https:\/\/github.com\/matterport\/Mask_RCNN\r\nAuthor: Matterport\r\nAuthor-email: waleed.abdulla@gmail.com\r\nLicense: MIT\r\nLocation: ...\r\nRequires:\r\nRequired-by:<\/pre>\n<p>We are now ready to use the library.<\/p>\n<h3>Example of Object Localization<\/h3>\n<p>We are going to use a pre-trained Mask R-CNN model to detect objects on a new photograph.<\/p>\n<h4>Step 1. 
Download Model Weights<\/h4>\n<p>First, download the weights for the pre-trained model, specifically a Mask R-CNN trained on the MS COCO dataset.<\/p>\n<p>The weights are available from the GitHub project, and the file is about 250 megabytes. Download the model weights to a file with the name \u2018<em>mask_rcnn_coco.h5<\/em>\u2018 in your current working directory.<\/p>\n<ul>\n<li><a href=\"https:\/\/github.com\/matterport\/Mask_RCNN\/releases\/download\/v2.0\/mask_rcnn_coco.h5\">Download Weights (mask_rcnn_coco.h5)<\/a> (246 megabytes)<\/li>\n<\/ul>\n<h4>Step 2. Download Sample Photograph<\/h4>\n<p>We also need a photograph in which to detect objects.<\/p>\n<p>We will use a photograph from Flickr released under a permissive license, specifically a <a href=\"https:\/\/www.flickr.com\/photos\/viewfrom52\/2081198423\/\">photograph of an elephant taken by Mandy Goldberg<\/a>.<\/p>\n<p>Download the photograph to your current working directory with the filename \u2018<em>elephant.jpg<\/em>\u2018.<\/p>\n<div id=\"attachment_7692\" style=\"width: 650px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-7692\" class=\"size-full wp-image-7692\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2019\/03\/elephant.jpg\" alt=\"Elephant\" width=\"640\" height=\"426\" srcset=\"http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2019\/03\/elephant.jpg 640w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2019\/03\/elephant-300x200.jpg 300w\" sizes=\"(max-width: 640px) 100vw, 640px\"><\/p>\n<p id=\"caption-attachment-7692\" class=\"wp-caption-text\">Elephant (elephant.jpg)<br \/>Taken by Mandy Goldberg, some rights reserved.<\/p>\n<\/div>\n<ul>\n<li><a href=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2019\/03\/elephant.jpg\">Download Photograph (elephant.jpg)<\/a><\/li>\n<\/ul>\n<h4>Step 3. 
Load Model and Make Prediction<\/h4>\n<p>First, the model must be defined via an instance of the <em>MaskRCNN<\/em> class.<\/p>\n<p>This class requires a configuration object as a parameter. The configuration object defines how the model might be used during training or inference.<\/p>\n<p>In this case, the configuration will only specify the number of images per batch, which will be one, and the number of classes to predict.<\/p>\n<p>You can see the full extent of the configuration object and the properties that you can override in the <a href=\"https:\/\/github.com\/matterport\/Mask_RCNN\/blob\/master\/mrcnn\/config.py\">config.py file<\/a>.<\/p>\n<pre class=\"crayon-plain-tag\"># define the test configuration\r\nclass TestConfig(Config):\r\n     NAME = \"test\"\r\n     GPU_COUNT = 1\r\n     IMAGES_PER_GPU = 1\r\n     NUM_CLASSES = 1 + 80<\/pre>\n<p>We can now define the <em>MaskRCNN<\/em> instance.<\/p>\n<p>We will define the model as type \u201c<em>inference<\/em>\u201d, indicating that we are interested in making predictions and not training. We must also specify a directory where any log messages could be written, which in this case will be the current working directory.<\/p>\n<pre class=\"crayon-plain-tag\"># define the model\r\nrcnn = MaskRCNN(mode='inference', model_dir='.\/', config=TestConfig())<\/pre>\n<p>The next step is to load the weights that we downloaded.<\/p>\n<pre class=\"crayon-plain-tag\"># load coco model weights\r\nrcnn.load_weights('mask_rcnn_coco.h5', by_name=True)<\/pre>\n<p>Now we can make a prediction for our image. First, we can load the image and convert it to a NumPy array.<\/p>\n<pre class=\"crayon-plain-tag\"># load photograph\r\nimg = load_img('elephant.jpg')\r\nimg = img_to_array(img)<\/pre>\n<p>We can then make a prediction with the model. 
Instead of calling <em>predict()<\/em> as we would on a normal Keras model, we will call the <em>detect()<\/em> function and pass it the single image.<\/p>\n<pre class=\"crayon-plain-tag\"># make prediction\r\nresults = rcnn.detect([img], verbose=0)<\/pre>\n<p>The result contains a dictionary for each image that we passed into the <em>detect()<\/em> function: in this case, a list of a single dictionary for the one image.<\/p>\n<p>The dictionary has keys for the bounding boxes, masks, and so on, and each key points to a list for multiple possible objects detected in the image.<\/p>\n<p>The keys of the dictionary of note are as follows:<\/p>\n<ul>\n<li>\u2018<em>rois<\/em>\u2018: The bounding boxes or regions-of-interest (ROI) for detected objects.<\/li>\n<li>\u2018<em>masks<\/em>\u2018: The masks for the detected objects.<\/li>\n<li>\u2018<em>class_ids<\/em>\u2018: The class integers for the detected objects.<\/li>\n<li>\u2018<em>scores<\/em>\u2018: The probability or confidence for each predicted class.<\/li>\n<\/ul>\n<p>We can draw each box detected in the image by first getting the dictionary for the first image (e.g. <em>results[0]<\/em>), and then retrieving the list of bounding boxes (e.g. 
<em>[\u2018rois\u2019]<\/em>).<\/p>\n<pre class=\"crayon-plain-tag\">boxes = results[0]['rois']<\/pre>\n<p>Each bounding box is defined in terms of the top-left and bottom-right coordinates of the box in the image.<\/p>\n<pre class=\"crayon-plain-tag\">y1, x1, y2, x2 = boxes[0]<\/pre>\n<p>We can use these coordinates to create a <a href=\"https:\/\/matplotlib.org\/api\/_as_gen\/matplotlib.patches.Rectangle.html\">Rectangle() from the matplotlib API<\/a> and draw each rectangle over the top of our image.<\/p>\n<pre class=\"crayon-plain-tag\"># get coordinates\r\ny1, x1, y2, x2 = box\r\n# calculate width and height of the box\r\nwidth, height = x2 - x1, y2 - y1\r\n# create the shape\r\nrect = Rectangle((x1, y1), width, height, fill=False, color='red')\r\n# draw the box\r\nax.add_patch(rect)<\/pre>\n<p>To keep things neat, we can create a function to do this that will take the filename of the photograph and the list of bounding boxes to draw, and will show the photo with the boxes.<\/p>\n<pre class=\"crayon-plain-tag\"># draw an image with detected objects\r\ndef draw_image_with_boxes(filename, boxes_list):\r\n     # load the image\r\n     data = pyplot.imread(filename)\r\n     # plot the image\r\n     pyplot.imshow(data)\r\n     # get the context for drawing boxes\r\n     ax = pyplot.gca()\r\n     # plot each box\r\n     for box in boxes_list:\r\n          # get coordinates\r\n          y1, x1, y2, x2 = box\r\n          # calculate width and height of the box\r\n          width, height = x2 - x1, y2 - y1\r\n          # create the shape\r\n          rect = Rectangle((x1, y1), width, height, fill=False, color='red')\r\n          # draw the box\r\n          ax.add_patch(rect)\r\n     # show the plot\r\n     pyplot.show()<\/pre>\n<p>We can now tie all of this together and load the pre-trained model and use it to detect objects in our photograph of an elephant, then draw the photograph with all detected objects.<\/p>\n<p>The complete example is listed 
below.<\/p>\n<pre class=\"crayon-plain-tag\"># example of inference with a pre-trained coco model\r\nfrom keras.preprocessing.image import load_img\r\nfrom keras.preprocessing.image import img_to_array\r\nfrom mrcnn.config import Config\r\nfrom mrcnn.model import MaskRCNN\r\nfrom matplotlib import pyplot\r\nfrom matplotlib.patches import Rectangle\r\n\r\n# draw an image with detected objects\r\ndef draw_image_with_boxes(filename, boxes_list):\r\n     # load the image\r\n     data = pyplot.imread(filename)\r\n     # plot the image\r\n     pyplot.imshow(data)\r\n     # get the context for drawing boxes\r\n     ax = pyplot.gca()\r\n     # plot each box\r\n     for box in boxes_list:\r\n          # get coordinates\r\n          y1, x1, y2, x2 = box\r\n          # calculate width and height of the box\r\n          width, height = x2 - x1, y2 - y1\r\n          # create the shape\r\n          rect = Rectangle((x1, y1), width, height, fill=False, color='red')\r\n          # draw the box\r\n          ax.add_patch(rect)\r\n     # show the plot\r\n     pyplot.show()\r\n\r\n# define the test configuration\r\nclass TestConfig(Config):\r\n     NAME = \"test\"\r\n     GPU_COUNT = 1\r\n     IMAGES_PER_GPU = 1\r\n     NUM_CLASSES = 1 + 80\r\n\r\n# define the model\r\nrcnn = MaskRCNN(mode='inference', model_dir='.\/', config=TestConfig())\r\n# load coco model weights\r\nrcnn.load_weights('mask_rcnn_coco.h5', by_name=True)\r\n# load photograph\r\nimg = load_img('elephant.jpg')\r\nimg = img_to_array(img)\r\n# make prediction\r\nresults = rcnn.detect([img], verbose=0)\r\n# visualize the results\r\ndraw_image_with_boxes('elephant.jpg', results[0]['rois'])<\/pre>\n<p>Running the example loads the model and performs object detection. 
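<\/p>\n<p>As an aside, every detection comes with a confidence score, so detections can be filtered before drawing. The sketch below is a minimal, generic helper (not part of the Matterport API); the dummy arrays are illustrative only and merely mimic the dictionary structure described above, with masks stored as (height, width, number of detections):<\/p>

```python
import numpy as np

# keep only detections whose confidence meets a threshold; `result`
# mimics one dictionary as returned by rcnn.detect()
def filter_detections(result, min_score=0.9):
    keep = result['scores'] >= min_score
    return {
        'rois': result['rois'][keep],
        'masks': result['masks'][..., keep],  # masks are (H, W, N)
        'class_ids': result['class_ids'][keep],
        'scores': result['scores'][keep],
    }

# illustrative dummy result: two detections, one low-confidence
dummy = {
    'rois': np.array([[10, 20, 110, 220], [5, 5, 50, 50]]),
    'masks': np.zeros((120, 240, 2), dtype=bool),
    'class_ids': np.array([22, 1]),
    'scores': np.array([0.99, 0.42]),
}
strong = filter_detections(dummy)
print(len(strong['rois']))  # 1
```

<p>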
More accurately, we have performed object localization, only drawing bounding boxes around detected objects.<\/p>\n<p>In this case, we can see that the model has correctly located the single object in the photo, the elephant, and drawn a red box around it.<\/p>\n<div id=\"attachment_7693\" style=\"width: 1290px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-7693\" class=\"size-full wp-image-7693\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2019\/03\/Photograph-of-an-Elephant-with-All-Objects-Localized-With-a-Bounding-Box.png\" alt=\"Photograph of an Elephant With All Objects Localized With a Bounding Box\" width=\"1280\" height=\"960\" srcset=\"http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2019\/03\/Photograph-of-an-Elephant-with-All-Objects-Localized-With-a-Bounding-Box.png 1280w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2019\/03\/Photograph-of-an-Elephant-with-All-Objects-Localized-With-a-Bounding-Box-300x225.png 300w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2019\/03\/Photograph-of-an-Elephant-with-All-Objects-Localized-With-a-Bounding-Box-768x576.png 768w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2019\/03\/Photograph-of-an-Elephant-with-All-Objects-Localized-With-a-Bounding-Box-1024x768.png 1024w\" sizes=\"(max-width: 1280px) 100vw, 1280px\"><\/p>\n<p id=\"caption-attachment-7693\" class=\"wp-caption-text\">Photograph of an Elephant With All Objects Localized With a Bounding Box<\/p>\n<\/div>\n<h3>Example of Object Detection<\/h3>\n<p>Now that we know how to load the model and use it to make a prediction, let\u2019s update the example to perform real object detection.<\/p>\n<p>That is, in addition to localizing objects, we want to know what they are.<\/p>\n<p>The <em>Mask_RCNN<\/em> API provides a function called 
<em>display_instances()<\/em> that will take the array of pixel values for the loaded image and the aspects of the prediction dictionary, such as the bounding boxes, scores, and class labels, and will plot the photo with all of these annotations.<\/p>\n<p>One of the arguments is the list of predicted class identifiers available in the \u2018<em>class_ids<\/em>\u2018 key of the dictionary. The function also needs a mapping of ids to class labels. The pre-trained model was fit with a dataset that had 80 (81 including background) class labels, helpfully provided as a list in the <a href=\"https:\/\/github.com\/matterport\/Mask_RCNN\/blob\/master\/samples\/demo.ipynb\">Mask R-CNN Demo, Notebook Tutorial<\/a>, listed below.<\/p>\n<pre class=\"crayon-plain-tag\"># define 81 classes that the coco model knows about\r\nclass_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',\r\n               'bus', 'train', 'truck', 'boat', 'traffic light',\r\n               'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',\r\n               'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',\r\n               'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',\r\n               'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',\r\n               'kite', 'baseball bat', 'baseball glove', 'skateboard',\r\n               'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',\r\n               'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',\r\n               'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',\r\n               'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',\r\n               'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',\r\n               'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',\r\n               'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',\r\n               'teddy bear', 'hair drier', 'toothbrush']<\/pre>\n<p>We can then provide the details 
of the prediction for the elephant photo to the <em>display_instances()<\/em> function; for example:<\/p>\n<pre class=\"crayon-plain-tag\"># get dictionary for first prediction\r\nr = results[0]\r\n# show photo with bounding boxes, masks, class labels and scores\r\ndisplay_instances(img, r['rois'], r['masks'], r['class_ids'], class_names, r['scores'])<\/pre>\n<p>The <em>display_instances()<\/em> function is flexible, allowing you to only draw the mask or only the bounding boxes. You can learn more about this function in the <a href=\"https:\/\/github.com\/matterport\/Mask_RCNN\/blob\/master\/mrcnn\/visualize.py\">visualize.py source file<\/a>.<\/p>\n<p>The complete example with this change using the <em>display_instances()<\/em> function is listed below.<\/p>\n<pre class=\"crayon-plain-tag\"># example of inference with a pre-trained coco model\r\nfrom keras.preprocessing.image import load_img\r\nfrom keras.preprocessing.image import img_to_array\r\nfrom mrcnn.visualize import display_instances\r\nfrom mrcnn.config import Config\r\nfrom mrcnn.model import MaskRCNN\r\n\r\n# define 81 classes that the coco model knows about\r\nclass_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',\r\n               'bus', 'train', 'truck', 'boat', 'traffic light',\r\n               'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',\r\n               'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',\r\n               'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',\r\n               'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',\r\n               'kite', 'baseball bat', 'baseball glove', 'skateboard',\r\n               'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',\r\n               'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',\r\n               'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',\r\n               'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',\r\n              
 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',\r\n               'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',\r\n               'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',\r\n               'teddy bear', 'hair drier', 'toothbrush']\r\n\r\n# define the test configuration\r\nclass TestConfig(Config):\r\n     NAME = \"test\"\r\n     GPU_COUNT = 1\r\n     IMAGES_PER_GPU = 1\r\n     NUM_CLASSES = 1 + 80\r\n\r\n# define the model\r\nrcnn = MaskRCNN(mode='inference', model_dir='.\/', config=TestConfig())\r\n# load coco model weights\r\nrcnn.load_weights('mask_rcnn_coco.h5', by_name=True)\r\n# load photograph\r\nimg = load_img('elephant.jpg')\r\nimg = img_to_array(img)\r\n# make prediction\r\nresults = rcnn.detect([img], verbose=0)\r\n# get dictionary for first prediction\r\nr = results[0]\r\n# show photo with bounding boxes, masks, class labels and scores\r\ndisplay_instances(img, r['rois'], r['masks'], r['class_ids'], class_names, r['scores'])<\/pre>\n<p>Running the example shows the photograph of the elephant with the annotations predicted by the Mask R-CNN model, specifically:<\/p>\n<ul>\n<li><strong>Bounding Box<\/strong>. Dotted bounding box around each detected object.<\/li>\n<li><strong>Class Label<\/strong>. Class label assigned to each detected object, written in the top left corner of the bounding box.<\/li>\n<li><strong>Prediction Confidence<\/strong>. Confidence of the class label prediction for each detected object, written in the top left corner of the bounding box.<\/li>\n<li><strong>Object Mask Outline<\/strong>. Polygon outline for the mask of each detected object.<\/li>\n<li><strong>Object Mask<\/strong>. 
Polygon fill for the mask of each detected object.<\/li>\n<\/ul>\n<p>The result is very impressive and sparks many ideas for how such a powerful pre-trained model could be used in practice.<\/p>\n<div id=\"attachment_7694\" style=\"width: 1034px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-7694\" class=\"size-large wp-image-7694\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2019\/03\/Photograph-of-an-Elephant-With-All-Objects-Detected-With-a-Bounding-Box-and-Mask-1024x682.png\" alt=\"Photograph of an Elephant With All Objects Detected With a Bounding Box and Mask\" width=\"1024\" height=\"682\" srcset=\"http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2019\/03\/Photograph-of-an-Elephant-With-All-Objects-Detected-With-a-Bounding-Box-and-Mask-1024x682.png 1024w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2019\/03\/Photograph-of-an-Elephant-With-All-Objects-Detected-With-a-Bounding-Box-and-Mask-300x200.png 300w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2019\/03\/Photograph-of-an-Elephant-With-All-Objects-Detected-With-a-Bounding-Box-and-Mask-768x512.png 768w\" sizes=\"(max-width: 1024px) 100vw, 1024px\"><\/p>\n<p id=\"caption-attachment-7694\" class=\"wp-caption-text\">Photograph of an Elephant With All Objects Detected With a Bounding Box and Mask<\/p>\n<\/div>\n<h2>Further Reading<\/h2>\n<p>This section provides more resources on the topic if you are looking to go deeper.<\/p>\n<h3>Papers<\/h3>\n<ul>\n<li><a href=\"https:\/\/arxiv.org\/abs\/1311.2524\">Rich feature hierarchies for accurate object detection and semantic segmentation<\/a>, 2013.<\/li>\n<li><a href=\"https:\/\/arxiv.org\/abs\/1406.4729\">Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition<\/a>, 2014.<\/li>\n<li><a href=\"https:\/\/arxiv.org\/abs\/1504.08083\">Fast 
R-CNN<\/a>, 2015.<\/li>\n<li><a href=\"https:\/\/arxiv.org\/abs\/1506.01497\">Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks<\/a>, 2016.<\/li>\n<li><a href=\"https:\/\/arxiv.org\/abs\/1703.06870\">Mask R-CNN<\/a>, 2017.<\/li>\n<\/ul>\n<h3>API<\/h3>\n<ul>\n<li><a href=\"https:\/\/matplotlib.org\/api\/_as_gen\/matplotlib.patches.Rectangle.html\">matplotlib.patches.Rectangle API<\/a><\/li>\n<\/ul>\n<h3>Resources<\/h3>\n<ul>\n<li><a href=\"https:\/\/github.com\/matterport\/Mask_RCNN\">Mask R-CNN, GitHub<\/a>.<\/li>\n<li><a href=\"https:\/\/github.com\/matterport\/Mask_RCNN\/blob\/master\/samples\/demo.ipynb\">Mask R-CNN Demo, Notebook<\/a>.<\/li>\n<li><a href=\"https:\/\/github.com\/matterport\/Mask_RCNN\/blob\/master\/samples\/coco\/inspect_model.ipynb\">Mask R-CNN \u2013 Inspect Trained Model, Notebook<\/a>.<\/li>\n<\/ul>\n<h3>R-CNN Code Repositories<\/h3>\n<ul>\n<li><a href=\"https:\/\/github.com\/rbgirshick\/rcnn\">R-CNN: Regions with Convolutional Neural Network Features, GitHub<\/a>.<\/li>\n<li><a href=\"https:\/\/github.com\/rbgirshick\/fast-rcnn\">Fast R-CNN, GitHub<\/a>.<\/li>\n<li><a href=\"https:\/\/github.com\/rbgirshick\/py-faster-rcnn\">Faster R-CNN Python Code, GitHub<\/a>.<\/li>\n<li><a href=\"https:\/\/github.com\/facebookresearch\/Detectron\">Detectron, Facebook AI, GitHub<\/a>.<\/li>\n<\/ul>\n<h2>Summary<\/h2>\n<p>In this tutorial, you discovered how to use the Mask R-CNN model to detect objects in new photographs.<\/p>\n<p>Specifically, you learned:<\/p>\n<ul>\n<li>The region-based Convolutional Neural Network family of models for object detection and the most recent variation called Mask R-CNN.<\/li>\n<li>The best-of-breed open source library implementation of the Mask R-CNN for the Keras deep learning library.<\/li>\n<li>How to use a pre-trained Mask R-CNN to perform object localization and detection on new photographs.<\/li>\n<\/ul>\n<p>Do you have any questions?<br \/>\nAsk your questions in the comments below and 
I will do my best to answer.<\/p>\n<p>The post <a rel=\"nofollow\" href=\"https:\/\/machinelearningmastery.com\/how-to-perform-object-detection-in-photographs-with-mask-r-cnn-in-keras\/\">How to Perform Object Detection in Photographs Using Mask R-CNN with Keras<\/a> appeared first on <a rel=\"nofollow\" href=\"https:\/\/machinelearningmastery.com\/\">Machine Learning Mastery<\/a>.<\/p>\n<\/div>\n<p><a href=\"https:\/\/machinelearningmastery.com\/how-to-perform-object-detection-in-photographs-with-mask-r-cnn-in-keras\/\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Jason Brownlee Object detection is a task in computer vision that involves identifying the presence, location, and type of one or more objects in [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2019\/05\/23\/how-to-perform-object-detection-in-photographs-using-mask-r-cnn-with-keras\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":2180,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2179"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=2179"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\
/wp-json\/wp\/v2\/posts\/2179\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/2180"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=2179"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=2179"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=2179"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}