{"id":2772,"date":"2019-11-04T05:00:00","date_gmt":"2019-11-04T05:00:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2019\/11\/04\/technique-helps-robots-find-the-front-door\/"},"modified":"2019-11-04T05:00:00","modified_gmt":"2019-11-04T05:00:00","slug":"technique-helps-robots-find-the-front-door","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2019\/11\/04\/technique-helps-robots-find-the-front-door\/","title":{"rendered":"Technique helps robots find the front door"},"content":{"rendered":"<p>Author: Jennifer Chu | MIT News Office<\/p>\n<div>\n<p>In the not too distant future, robots may be dispatched as last-mile delivery vehicles to drop your takeout order, package, or meal-kit subscription at your doorstep \u2014 if they can find the door.<\/p>\n<p>Standard approaches for robotic navigation involve mapping an area ahead of time, then using algorithms to guide a robot toward a specific goal or GPS coordinate on the map. While this approach might make sense for exploring specific environments, such as the layout of a particular building or planned obstacle course, it can become unwieldy in the context of last-mile delivery.<\/p>\n<p>Imagine, for instance, having to map in advance every single neighborhood within a robot\u2019s delivery zone, including the configuration of each house within that neighborhood along with the specific coordinates of each house\u2019s front door. Such a task can be difficult to scale to an entire city, particularly as the exteriors of houses often change with the seasons. Mapping every single house could also run into issues of security and privacy.<\/p>\n<p>Now MIT engineers have developed a navigation method that doesn\u2019t require mapping an area in advance. 
Instead, their approach enables a robot to use clues in its environment to plan out a route to its destination, which can be described in general semantic terms, such as \u201cfront door\u201d or \u201cgarage,\u201d rather than as coordinates on a map. For example, if a robot is instructed to deliver a package to someone\u2019s front door, it might start on the road and see a driveway, which it has been trained to recognize as likely to lead toward a sidewalk, which in turn is likely to lead to the front door.<\/p>\n<p>The new technique can greatly reduce the time a robot spends exploring a property before identifying its target, and it doesn\u2019t rely on maps of specific residences.<\/p>\n<p>\u201cWe wouldn\u2019t want to have to make a map of every building that we\u2019d need to visit,\u201d says Michael Everett, a graduate student in MIT\u2019s Department of Mechanical Engineering. \u201cWith this technique, we hope to drop a robot at the end of any driveway and have it find a door.\u201d<\/p>\n<p>Everett will present the group\u2019s results this week at the International Conference on Intelligent Robots and Systems. 
The paper, which is co-authored by Jonathan How, professor of aeronautics and astronautics at MIT, and Justin Miller of the Ford Motor Company, is a finalist for \u201cBest Paper for Cognitive Robots.\u201d<\/p>\n<p><strong>\u201cA sense of what things are\u201d<\/strong><\/p>\n<p>In recent years, researchers have worked on introducing natural, semantic language to robotic systems, training robots to recognize objects by their semantic labels, so they can visually process a door as a door, for example, and not simply as a solid, rectangular obstacle.<\/p>\n<p>\u201cNow we have an ability to give robots a sense of what things are, in real time,\u201d Everett says.<\/p>\n<p>Everett, How, and Miller are using similar semantic techniques as a springboard for their new navigation approach, which leverages preexisting algorithms that extract features from visual data to generate a new map of the same scene, represented as semantic clues, or context.<\/p>\n<p>In their case, the researchers used an algorithm to build up a map of the environment as the robot moved around, using the semantic labels of each object and a depth image. This algorithm is called semantic SLAM (Simultaneous Localization and Mapping).<\/p>\n<p>While other semantic algorithms have enabled robots to recognize and map objects in their environment for what they are, they haven\u2019t allowed a robot to decide, in the moment, on the most efficient path to take through a new environment to a semantic destination such as a \u201cfront door.\u201d<\/p>\n<p>\u201cBefore, exploring was just, plop a robot down and say \u2018go,\u2019 and it will move around and eventually get there, but it will be slow,\u201d How says.<\/p>\n<p><strong>The cost to go<\/strong><\/p>\n<p>The researchers looked to speed up a robot\u2019s path-planning through a semantic, context-colored world. 
They developed a new \u201ccost-to-go estimator,\u201d an algorithm that converts a semantic map created by preexisting SLAM algorithms into a second map, representing the likelihood of any given location being close to the goal.<\/p>\n<p>\u201cThis was inspired by image-to-image translation, where you take a picture of a cat and make it look like a dog,\u201d Everett says. \u201cThe same type of idea happens here where you take one image that looks like a map of the world, and turn it into this other image that looks like the map of the world but now is colored based on how close different points of the map are to the end goal.\u201d<\/p>\n<p>This cost-to-go map is rendered in gray-scale: darker regions represent locations far from the goal, and lighter regions represent areas close to it. For instance, the sidewalk, coded in yellow in a semantic map, might be translated by the cost-to-go algorithm as a darker region in the new map, compared with a driveway, which is progressively lighter as it approaches the front door \u2014 the lightest region in the new map.<\/p>\n<p>The researchers trained this new algorithm on satellite images from Bing Maps containing 77 houses from one urban and three suburban neighborhoods. The system converted a semantic map into a cost-to-go map, and mapped out the most efficient path, following lighter regions in the map, to the end goal. For each satellite image, Everett assigned semantic labels and colors to context features in a typical front yard, such as gray for a front door, blue for a driveway, and green for a hedge.<\/p>\n<p>During this training process, the team also applied masks to each image to mimic the partial view that a robot\u2019s camera would likely have as it traverses a yard.<\/p>\n<p>\u201cPart of the trick to our approach was [giving the system] lots of partial images,\u201d How explains. \u201cSo it really had to figure out how all this stuff was interrelated. 
That\u2019s part of what makes this work robustly.\u201d<\/p>\n<p>The researchers then tested their approach in simulation on an image of an entirely new house outside the training dataset, first using the preexisting SLAM algorithm to generate a semantic map, then applying their new cost-to-go estimator to generate a second map and a path to the goal, in this case the front door.<\/p>\n<p>The group\u2019s new cost-to-go technique found the front door 189 percent faster than classical navigation algorithms, which do not take context or semantics into account, and instead take excessive steps exploring areas that are unlikely to be near their goal.<\/p>\n<p>Everett says the results illustrate how robots can use context to efficiently locate a goal, even in unfamiliar, unmapped environments.<\/p>\n<p>\u201cEven if a robot is delivering a package to an environment it\u2019s never been to, there might be clues that will be the same as other places it\u2019s seen,\u201d Everett says. \u201cSo the world may be laid out a little differently, but there\u2019s probably some things in common.\u201d<\/p>\n<p>This research is supported, in part, by the Ford Motor Company.<\/p>\n<\/div>\n<p><a href=\"http:\/\/news.mit.edu\/2019\/technique-helps-robots-find-front-door-1104\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Jennifer Chu | MIT News Office In the not too distant future, robots may be dispatched as last-mile delivery vehicles to drop your takeout [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2019\/11\/04\/technique-helps-robots-find-the-front-door\/\">Read 
More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":467,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2772"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=2772"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2772\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/464"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=2772"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=2772"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=2772"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}