{"id":5126,"date":"2021-10-20T04:00:00","date_gmt":"2021-10-20T04:00:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2021\/10\/20\/one-giant-leap-for-the-mini-cheetah\/"},"modified":"2021-10-20T04:00:00","modified_gmt":"2021-10-20T04:00:00","slug":"one-giant-leap-for-the-mini-cheetah","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2021\/10\/20\/one-giant-leap-for-the-mini-cheetah\/","title":{"rendered":"One giant leap for the mini cheetah"},"content":{"rendered":"<p>Author: Adam Zewe | MIT News Office<\/p>\n<div>\n<p>A loping cheetah dashes across a rolling field, bounding over sudden gaps in the rugged terrain. The movement may look effortless, but getting a robot to move this way is an altogether different prospect.<\/p>\n<\/p>\n<p>In recent years, four-legged robots inspired by the movement of cheetahs and other animals have made great leaps forward, yet they still lag behind their mammalian counterparts when it comes to traveling across a landscape with rapid elevation changes.<\/p>\n<\/p>\n<p>\u201cIn those settings, you need to use vision in order to avoid failure. For example, stepping in a gap is difficult to avoid if you can\u2019t see it. Although there are some existing methods for incorporating vision into legged locomotion, most of them aren\u2019t really suitable for use with emerging agile robotic systems,\u201d says Gabriel Margolis, a PhD student in the lab of Pulkit Agrawal, professor in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.<\/p>\n<\/p>\n<p>Now, Margolis and his collaborators have developed a <a href=\"https:\/\/sites.google.com\/view\/jumpingfrompixels\">system that improves the speed and agility of legged robots<\/a> as they jump across gaps in the terrain. 
The novel control system is split into two parts \u2014 one that processes real-time input from a video camera mounted on the front of the robot and another that translates that information into instructions for how the robot should move its body. The researchers tested their system on the MIT mini cheetah, a powerful, agile robot built in the lab of Sangbae Kim, professor of mechanical engineering.<\/p>\n<p>Unlike other methods for controlling a four-legged robot, this two-part system does not require the terrain to be mapped in advance, so the robot can go anywhere. In the future, this could enable robots to charge off into the woods on an emergency response mission or climb a flight of stairs to deliver medication to an elderly shut-in.<\/p>\n<p>Margolis wrote the paper with senior author Pulkit Agrawal, who heads the Improbable AI lab at MIT and is the Steven G. and Renee Finn Career Development Assistant Professor in the Department of Electrical Engineering and Computer Science; Professor Sangbae Kim in the Department of Mechanical Engineering at MIT; and fellow graduate students Tao Chen and Xiang Fu at MIT. Other co-authors include Kartik Paigwar, a graduate student at Arizona State University; and Donghyun Kim, an assistant professor at the University of Massachusetts at Amherst. The work will be presented next month at the Conference on Robot Learning.<\/p>\n<p><strong>It\u2019s all under control<\/strong><\/p>\n<p>The use of two separate controllers working together makes this system especially innovative.<\/p>\n<p>A controller is an algorithm that converts the robot\u2019s state into a set of actions for it to follow. Many blind controllers \u2014 those that do not incorporate vision \u2014 are robust and effective but only enable robots to walk over continuous terrain.<\/p>\n<p>Vision is such a complex sensory input to process that these algorithms are unable to handle it efficiently. 
Systems that do incorporate vision usually rely on a \u201cheightmap\u201d of the terrain, which must be either preconstructed or generated on the fly, a process that is typically slow and prone to failure if the heightmap is incorrect.<\/p>\n<p>To develop their system, the researchers took the best elements from these robust, blind controllers and combined them with a separate module that handles vision in real time.<\/p>\n<p>The robot\u2019s camera captures depth images of the upcoming terrain, which are fed to a high-level controller along with information about the state of the robot\u2019s body (joint angles, body orientation, etc.). The high-level controller is a <a href=\"https:\/\/news.mit.edu\/2017\/explained-neural-networks-deep-learning-0414\">neural network<\/a> that \u201clearns\u201d from experience.<\/p>\n<p>That neural network outputs a target trajectory, which the second controller uses to come up with torques for each of the robot\u2019s 12 joints. This low-level controller is not a neural network and instead relies on a set of concise, physical equations that describe the robot\u2019s motion.<\/p>\n<p>\u201cThe hierarchy, including the use of this low-level controller, enables us to constrain the robot\u2019s behavior so it is more well-behaved. With this low-level controller, we are using well-specified models that we can impose constraints on, which isn\u2019t usually possible in a learning-based network,\u201d Margolis says.<\/p>\n<p><strong>Teaching the network<\/strong><\/p>\n<p>The researchers used the trial-and-error method known as reinforcement learning to train the high-level controller. 
They conducted simulations of the robot running across hundreds of different discontinuous terrains and rewarded it for successful crossings.<\/p>\n<p>Over time, the algorithm learned which actions maximized the reward.<\/p>\n<p>Then they built a physical, gapped terrain with a set of wooden planks and put their control scheme to the test using the mini cheetah.<\/p>\n<p>\u201cIt was definitely fun to work with a robot that was designed in-house at MIT by some of our collaborators. The mini cheetah is a great platform because it is modular and made mostly from parts that you can order online, so if we wanted a new battery or camera, it was just a simple matter of ordering it from a regular supplier and, with a little bit of help from Sangbae\u2019s lab, installing it,\u201d Margolis says.<\/p>\n<p>Estimating the robot\u2019s state proved to be a challenge in some cases. Unlike in simulation, real-world sensors encounter noise that can accumulate and affect the outcome. So, for some experiments that involved high-precision foot placement, the researchers used a motion capture system to measure the robot\u2019s true position.<\/p>\n<p>Their system outperformed others that only use one controller, and the mini cheetah successfully crossed 90 percent of the terrains.<\/p>\n<p>\u201cOne novelty of our system is that it does adjust the robot\u2019s gait. If a human were trying to leap across a really wide gap, they might start by running really fast to build up speed and then they might put both feet together to have a really powerful leap across the gap. 
In the same way, our robot can adjust the timings and duration of its foot contacts to better traverse the terrain,\u201d Margolis says.<\/p>\n<p><strong>Leaping out of the lab<\/strong><\/p>\n<p>While the researchers were able to demonstrate that their control scheme works in a laboratory, they still have a long way to go before they can deploy the system in the real world, Margolis says.<\/p>\n<p>In the future, they hope to mount a more powerful computer to the robot so it can do all its computation on board. They also want to improve the robot\u2019s state estimator to eliminate the need for the motion capture system. In addition, they\u2019d like to improve the low-level controller so it can exploit the robot\u2019s full range of motion, and enhance the high-level controller so it works well in different lighting conditions.<\/p>\n<p>\u201cIt is remarkable to witness the flexibility of machine learning techniques capable of bypassing carefully designed intermediate processes (e.g. state estimation and trajectory planning) that centuries-old model-based techniques have relied on,\u201d Kim says. \u201cI am excited about the future of mobile robots with more robust vision processing trained specifically for locomotion.\u201d<\/p>\n<p>The research is supported, in part, by MIT\u2019s Improbable AI Lab, Biomimetic Robotics Laboratory, NAVER LABS, and the DARPA Machine Common Sense Program.<\/p>\n<\/div>\n<p><a href=\"https:\/\/news.mit.edu\/2021\/one-giant-leap-mini-cheetah-1020\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Adam Zewe | MIT News Office A loping cheetah dashes across a rolling field, bounding over sudden gaps in the rugged terrain. 
The movement [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2021\/10\/20\/one-giant-leap-for-the-mini-cheetah\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":465,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/5126"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=5126"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/5126\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/459"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=5126"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=5126"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=5126"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}