{"id":2828,"date":"2019-11-18T21:10:01","date_gmt":"2019-11-18T21:10:01","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2019\/11\/18\/predicting-peoples-driving-personalities\/"},"modified":"2019-11-18T21:10:01","modified_gmt":"2019-11-18T21:10:01","slug":"predicting-peoples-driving-personalities","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2019\/11\/18\/predicting-peoples-driving-personalities\/","title":{"rendered":"Predicting people&#8217;s driving personalities"},"content":{"rendered":"<p>Author: Adam Conner-Simons | Rachel Gordon | MIT CSAIL<\/p>\n<div>\n<p>Self-driving cars are coming. But for all their fancy sensors and intricate data-crunching abilities, even the most cutting-edge cars lack something that (almost) every 16-year-old with a learner\u2019s permit has: social awareness.<\/p>\n<p>While autonomous technologies have improved substantially, they still ultimately view the drivers around them as obstacles made up of ones and zeros, rather than human beings with specific intentions, motivations, and personalities.<\/p>\n<p>But recently a team led by researchers at MIT\u2019s <a href=\"http:\/\/csail.mit.edu\/\">Computer Science and Artificial Intelligence Laboratory<\/a> (CSAIL) has been exploring whether self-driving cars can be programmed to classify the social personalities of other drivers, so that they can better predict what different cars will do \u2014 and, therefore, be able to drive more safely among them.<\/p>\n<p>In a new paper, the scientists integrated tools from social psychology to classify driving behavior with respect to how selfish or selfless a particular driver is.<\/p>\n<p>Specifically, they used something called social value orientation (SVO), which represents the degree to which someone is selfish (\u201cegoistic\u201d) versus altruistic or cooperative (\u201cprosocial\u201d). 
<p>The system then estimates drivers’ SVOs to create real-time driving trajectories for self-driving cars.</p>
<p>Testing their algorithm on the tasks of merging lanes and making unprotected left turns, the team showed that it improved predictions of other cars’ behavior by 25 percent. For example, in the left-turn simulations their car knew to wait when the approaching car had a more egoistic driver, and to make the turn when the other car’s driver was more prosocial.</p>
<p>While not yet robust enough to be implemented on real roads, the system could have some intriguing use cases, and not just for cars that drive themselves. Say you’re a human driving along and a car suddenly enters your blind spot — the system could give you a warning in the rear-view mirror that the car has an aggressive driver, allowing you to adjust accordingly. It could also allow self-driving cars to learn to exhibit more human-like behavior that is easier for human drivers to understand.</p>
<p>“Working with and around humans means figuring out their intentions to better understand their behavior,” says graduate student Wilko Schwarting, lead author on the new paper, which will be published this week in the <em>Proceedings of the National Academy of Sciences</em>. “People’s tendencies to be collaborative or competitive often spill over into how they behave as drivers. In this paper, we sought to understand if this was something we could actually quantify.”</p>
<p>Schwarting’s co-authors include MIT professors Sertac Karaman and Daniela Rus, as well as research scientist Alyssa Pierson and former CSAIL postdoc Javier Alonso-Mora.</p>
<p>A central issue with today’s self-driving cars is that they’re programmed to assume that all humans act the same way.</p>
<p>This means that, among other things, they’re quite conservative in their decision-making at four-way stops and other intersections.</p>
<p>While this caution reduces the chance of fatal accidents, it also <a href="https://www.popsci.com/self-driving-cars-unprotected-left-turns/">creates bottlenecks</a> that can be frustrating for other drivers, not to mention hard for them to understand. (This may be why the majority of traffic incidents involving self-driving cars have been cases of their <a href="https://twitter.com/wired/status/1053031276251234305">getting rear-ended by impatient drivers</a>.)</p>
<p>“Creating more human-like behavior in autonomous vehicles (AVs) is fundamental for the safety of passengers and surrounding vehicles, since behaving in a predictable manner enables humans to understand and appropriately respond to the AV’s actions,” says Schwarting.</p>
<p>To try to expand the car’s social awareness, the CSAIL team combined methods from social psychology with game theory, a theoretical framework for analyzing strategic interactions among competing players.</p>
<p>The team modeled road scenarios where each driver tried to maximize their own utility, and analyzed the drivers’ “best responses” given the decisions of all other agents. Based on that small snippet of motion from other cars, the team’s algorithm could then predict the surrounding cars’ behavior as cooperative, altruistic, or egoistic — grouping the first two as “prosocial.” People’s scores for these qualities rest on a continuum with respect to how much a person demonstrates care for themselves versus care for others.</p>
<p>In the merging and left-turn scenarios, the two outcome options were to either let somebody merge into your lane (“prosocial”) or not (“egoistic”).</p>
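<p>The best-response step described above can be sketched in miniature. Assuming a one-shot merge interaction with two discrete actions (yield or go) and made-up payoffs (the numbers are illustrative, not from the paper), a candidate SVO angle can be scored by whether the observed action would have been a best response under it:</p>

```python
import math

# Illustrative payoffs for one driver in a two-car merge:
# (own_reward, other_reward) for each of that driver's actions.
PAYOFFS = {
    "go":    (1.0, -0.5),  # push into the gap: good for me, bad for the other car
    "yield": (0.2,  1.0),  # wait: small cost to me, large benefit to the other car
}

def svo_utility(own: float, other: float, angle: float) -> float:
    """SVO-weighted utility: 0 rad is egoistic, pi/4 is prosocial."""
    return math.cos(angle) * own + math.sin(angle) * other

def best_response(angle: float) -> str:
    """The action that maximizes this driver's SVO-weighted utility."""
    return max(PAYOFFS, key=lambda a: svo_utility(*PAYOFFS[a], angle))

def consistent_svos(observed_action: str,
                    candidates=(0.0, math.pi / 4, math.pi / 2)) -> list:
    """Candidate SVO angles under which the observed action is a best
    response -- a toy version of estimating SVO from a snippet of motion."""
    return [a for a in candidates if best_response(a) == observed_action]

print(best_response(0.0))           # -> go    (egoistic driver pushes in)
print(best_response(math.pi / 4))   # -> yield (prosocial driver lets you merge)
```

<p>Inverting the model this way — observing an action and keeping only the SVO values that rationalize it — is the intuition behind estimating another driver’s orientation from a short burst of observed motion; the paper’s actual estimator works over continuous trajectories rather than two discrete actions.</p>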
<p>The team’s results showed that, not surprisingly, merging cars are deemed more competitive than non-merging cars.</p>
<p>The system was trained to try to better understand when it’s appropriate to exhibit different behaviors. For example, even the most deferential of human drivers knows that certain types of actions — like making a lane change in heavy traffic — require a moment of being more assertive and decisive.</p>
<p>For the next phase of the research, the team plans to apply their model to pedestrians, bicycles, and other agents in driving environments. In addition, they will investigate other robotic systems acting among humans, such as household robots, and integrate SVO into those systems’ prediction and decision-making algorithms. Pierson says that the ability to estimate SVO distributions directly from observed motion, instead of in laboratory conditions, will be important for fields far beyond autonomous driving.</p>
<p>“By modeling driving personalities and incorporating the models mathematically using the SVO in the decision-making module of a robot car, this work opens the door to safer and more seamless road-sharing between human-driven and robot-driven cars,” says Rus.</p>
<p>The research was supported by the Toyota Research Institute for the MIT team. The Netherlands Organization for Scientific Research provided support for the specific participation of Alonso-Mora.</p>
<p><a href="http://news.mit.edu/2019/predicting-driving-personalities-1118">Go to Source</a></p>