{"id":2050,"date":"2019-04-23T15:00:01","date_gmt":"2019-04-23T15:00:01","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2019\/04\/23\/improving-security-as-artificial-intelligence-moves-to-smartphones\/"},"modified":"2019-04-23T15:00:01","modified_gmt":"2019-04-23T15:00:01","slug":"improving-security-as-artificial-intelligence-moves-to-smartphones","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2019\/04\/23\/improving-security-as-artificial-intelligence-moves-to-smartphones\/","title":{"rendered":"Improving security as artificial intelligence moves to smartphones"},"content":{"rendered":"<p>Author: Kim Martineau | MIT Quest for Intelligence<\/p>\n<div>\n<p>Smartphones, security cameras, and speakers are just a few of the devices that will soon be running more artificial intelligence software to speed up image- and speech-processing tasks. A compression technique known as quantization is smoothing the way by making deep learning models smaller to reduce computation and energy costs. But smaller models, it turns out, make it easier for malicious attackers to trick an AI system into misbehaving \u2014 a concern as more complex decision-making is handed off to machines.\u00a0<\/p>\n<p>In a\u00a0<a href=\"https:\/\/openreview.net\/pdf?id=ryetZ20ctX\">new study<\/a>, MIT and IBM researchers show just how vulnerable compressed AI models are to adversarial attack, and they offer a fix: add a mathematical constraint during the quantization process to reduce the odds that an AI will fall prey to a slightly modified image and misclassify what it sees.\u00a0<\/p>\n<p>When a deep learning model is reduced from the standard 32 bits to\u00a0a lower bit length, it\u2019s\u00a0more likely to misclassify altered images due to an error amplification\u00a0effect: The manipulated image becomes more distorted with each extra layer of processing. 
By the end, the model is more likely to mistake a bird for a cat, for example, or a frog for a deer.\u00a0<\/p>\n<p>Models quantized to 8 bits or fewer are more susceptible to adversarial attacks, the researchers show, with accuracy falling from an already low 30-40 percent to less than 10 percent as bit width declines. But enforcing a Lipschitz constraint during quantization restores some resilience. When the researchers added the constraint, they saw small performance gains under attack, with the smaller models in some cases outperforming the 32-bit model.\u00a0<\/p>\n<p>\u201cOur technique limits error amplification and can even make compressed deep learning models more robust than full-precision models,\u201d says\u00a0<a href=\"https:\/\/songhan.mit.edu\/\">Song Han<\/a>, an assistant professor in MIT\u2019s\u00a0<a href=\"https:\/\/www.eecs.mit.edu\/\">Department of Electrical Engineering and Computer Science<\/a>\u00a0and a member of MIT\u2019s\u00a0<a href=\"https:\/\/www.mtl.mit.edu\/\">Microsystems Technology Laboratories<\/a>. \u201cWith proper quantization, we can\u00a0limit\u00a0the error.\u201d\u00a0<\/p>\n<p>The team plans to further improve the technique by training it on larger datasets and applying it to a wider range of models. \u201cDeep learning models need to be fast and secure as they move into a world of internet-connected devices,\u201d says study coauthor <a href=\"http:\/\/people.csail.mit.edu\/ganchuang\/\">Chuang Gan<\/a>, a researcher at the MIT-IBM Watson AI Lab. \u201cOur Defensive Quantization technique helps on both fronts.\u201d<\/p>\n<p>The researchers, who include MIT graduate student Ji Lin, present their results at the\u00a0<a href=\"https:\/\/iclr.cc\/\">International Conference on Learning Representations<\/a>\u00a0in May.<\/p>\n<p>In making AI models smaller so that they run faster and use less energy, Han is using AI itself to push the limits of model compression technology. 
In related\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/1811.08886.pdf\">recent work<\/a>, Han and his colleagues show how reinforcement learning can be used to automatically find the smallest bit length for each layer in a quantized model based on how quickly the device running the model can process images. This flexible bit-width approach reduces latency and energy use by as much as a factor of two compared to a fixed, 8-bit model, says Han. The researchers will present their results at the\u00a0<a href=\"http:\/\/cvpr2019.thecvf.com\/\">Computer Vision and Pattern Recognition<\/a>\u00a0conference in June.\u00a0<\/p>\n<\/div>\n<p><a href=\"http:\/\/news.mit.edu\/2019\/improving-security-ai-moves-to-smartphones-0423\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Kim Martineau | MIT Quest for Intelligence Smartphones, security cameras, and speakers are just a few of the devices that will soon be running [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2019\/04\/23\/improving-security-as-artificial-intelligence-moves-to-smartphones\/\">Read 
More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":463,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2050"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=2050"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2050\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/457"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=2050"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=2050"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=2050"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}