{"id":5847,"date":"2022-08-21T04:00:00","date_gmt":"2022-08-21T04:00:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2022\/08\/21\/bringing-lessons-from-cybersecurity-to-the-fight-against-disinformation\/"},"modified":"2022-08-21T04:00:00","modified_gmt":"2022-08-21T04:00:00","slug":"bringing-lessons-from-cybersecurity-to-the-fight-against-disinformation","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2022\/08\/21\/bringing-lessons-from-cybersecurity-to-the-fight-against-disinformation\/","title":{"rendered":"Bringing lessons from cybersecurity to the fight against disinformation"},"content":{"rendered":"<p>Author: Kylie Foy | MIT Lincoln Laboratory<\/p>\n<div>\n<p>Mary Ellen Zurko remembers the feeling of disappointment. Not long after earning her bachelor\u2019s degree from MIT, she was working her first job of evaluating secure computer systems for the U.S. government. The goal was to determine whether systems were compliant with the \u201cOrange Book,\u201d the government\u2019s authoritative manual on cybersecurity at the time. Were the systems technically secure? Yes. In practice? Not so much. \u00a0<\/p>\n<p>\u201cThere was no concern whatsoever for whether the security demands on end users were at all realistic,\u201d says Zurko. \u201cThe notion of a secure system was about the technology, and it assumed perfect, obedient humans.\u201d<\/p>\n<p>That discomfort started her on a track that would define Zurko\u2019s career. In 1996, after a return to MIT for a master\u2019s in computer science, she published an influential paper introducing the term \u201cuser-centered security.\u201d It grew into a field of its own, concerned with making sure that cybersecurity is balanced with usability, or else humans might circumvent security protocols and give attackers a foot in the door. 
Lessons from usable security now surround us, influencing the design of phishing warnings when we visit an insecure site or the invention of the \u201cstrength\u201d bar when we type a desired password.<\/p>\n<p>Now a cybersecurity researcher at <a href=\"https:\/\/www.ll.mit.edu\/\">MIT Lincoln Laboratory<\/a>, Zurko is still enmeshed in humans\u2019 relationship with computers. Her focus has shifted toward technology to counter influence operations, or attempts by foreign adversaries to deliberately spread false information (disinformation) on social media, with the intent of disrupting U.S. ideals.<\/p>\n<p>In a recent <a href=\"https:\/\/ieeexplore.ieee.org\/document\/9782833\">editorial published in <em>IEEE Security &amp; Privacy<\/em><\/a>, Zurko argues that many of the \u201chuman problems\u201d within the usable security field have similarities to the problems of tackling disinformation. To some extent, she is facing a similar undertaking as that in her early career: convincing peers that such human issues are cybersecurity issues, too.<\/p>\n<p>\u201cIn cybersecurity, attackers use humans as one means to subvert a technical system. Disinformation campaigns are meant to impact human decision-making; they\u2019re sort of the ultimate use of cyber technology to subvert humans,\u201d she says. \u201cBoth use computer technology and humans to get to a goal. It&#8217;s only the goal that&#8217;s different.\u201d<\/p>\n<p><strong>Getting ahead of influence operations<\/strong><\/p>\n<p>Research in counteracting online influence operations is still young. Three years ago, Lincoln Laboratory initiated a study on the topic to understand its implications for national security. The field has since ballooned, notably since the spread of dangerous, misleading Covid-19 claims online, perpetuated in some cases by China and Russia, as one <a href=\"https:\/\/www.rand.org\/pubs\/research_reports\/RRA112-21.html\">RAND study found<\/a>. 
There is now dedicated funding through the laboratory\u2019s <a href=\"https:\/\/www.ll.mit.edu\/r-d\/technology-office\">Technology Office<\/a> toward developing influence operations countermeasures.<\/p>\n<p>\u201cIt&#8217;s important for us to strengthen our democracy and make all our citizens resilient to the kinds of disinformation campaigns targeted at them by international adversaries, who seek to disrupt our internal processes,\u201d Zurko says.<\/p>\n<p>Like cyberattacks, influence operations often follow a multistep path, called a kill chain, to exploit predictable weaknesses. Studying and reinforcing those weaknesses can work in fighting influence operations, just as they do in cyber defense. Lincoln Laboratory is developing technology to support \u201csource tending,\u201d or reinforcing early stages in the kill chain when adversaries begin to find opportunities for a divisive or misleading narrative and build accounts to amplify it. Source tending helps cue U.S. information-operations personnel to a brewing disinformation campaign.<\/p>\n<p>A couple of approaches at the laboratory are aimed at source tending. One approach leverages machine learning to study digital personas, with the intent of identifying when the same person is behind multiple malicious accounts. Another focuses on building computational models that can identify deepfakes, or AI-generated videos and photos created to mislead viewers. Researchers are also developing tools to automatically identify which accounts hold the most influence over a narrative. First, the tools identify a narrative (<a href=\"https:\/\/arxiv.org\/abs\/2005.10879\">in one paper<\/a>, the researchers studied the disinformation campaign against French presidential candidate Emmanuel Macron) and gather data related to that narrative, such as keywords, retweets, and likes. 
Then, they use an analytical technique called causal network analysis to define and rank the influence of specific accounts \u2014 which accounts often generate posts that go viral?<\/p>\n<p>These technologies are feeding into the work that Zurko is leading to develop a counter-influence operations test bed. The goal is to create a safe space to simulate social media environments and test counter-technologies. Most importantly, the test bed will allow human operators to be put into the loop to see how well new technologies help them do their jobs.<\/p>\n<p>\u201cOur military\u2019s information-operations personnel are lacking a way to measure impact. By standing up a test bed, we can use multiple different technologies, in a repeatable fashion, to grow metrics that let us see if these technologies actually make operators more effective in identifying a disinformation campaign and the actors behind it.\u201d<\/p>\n<p>This vision is still aspirational as the team builds up the test bed environment. Simulating social media users and what Zurko calls the \u201cgrey cell,\u201d the unwitting participants to online influence, is one of the greatest challenges to emulating real-world conditions. Reconstructing social media platforms is also a challenge; each platform has its own policies for dealing with disinformation and proprietary algorithms that influence disinformation\u2019s reach. For example, <em>The Washington Post<\/em> <a href=\"https:\/\/www.washingtonpost.com\/technology\/2021\/10\/26\/facebook-angry-emoji-algorithm\/\">reported<\/a> that Facebook\u2019s algorithm gave \u201cextra value\u201d to news that received anger reactions, making it five times more likely to appear on a user\u2019s news feed \u2014 and such content is disproportionately likely to include misinformation. 
These often-hidden dynamics are important to replicate in a test bed, both to study the spread of fake news and understand the impact of interventions.<\/p>\n<p><strong>Taking a full-system approach<\/strong><\/p>\n<p>In addition to building a test bed to combine new ideas, Zurko is also advocating for a unified space that disinformation researchers can call their own. Such a space would allow researchers in sociology, psychology, policy, and law to come together and share cross-cutting aspects of their work alongside cybersecurity experts. The best defenses against disinformation will require this diversity of expertise, Zurko says, and \u201ca full-system approach of both human-centered and technical defenses.\u201d<\/p>\n<p>Though this space doesn\u2019t yet exist, it\u2019s likely on the horizon as the field continues to grow. Influence operations research is gaining traction in the cybersecurity world. \u201cJust recently, the top conferences have begun putting disinformation research in their call for papers, which is a real indicator of where things are going,\u201d Zurko says. \u201cBut, some people still hold on to the old-school idea that messy humans don\u2019t have anything to do with cybersecurity.\u201d<\/p>\n<p>Despite those sentiments, Zurko still trusts her early observation as a researcher \u2014 what cyber technology can do effectively is moderated by how people use it. She wants to continue to design technology, and approach problem-solving, in a way that places humans center-frame. \u201cFrom the very start, what I loved about cybersecurity is that it\u2019s partly mathematical rigor and partly sitting around the \u2018campfire\u2019 telling stories and learning from one another,\u201d Zurko reflects. 
Disinformation gets its power from humans\u2019 ability to influence each other; that ability may also just be the most powerful defense we have.<\/p>\n<\/div>\n<p><a href=\"https:\/\/news.mit.edu\/2022\/bringing-lessons-cybersecurity-fight-against-disinformation-0821\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Kylie Foy | MIT Lincoln Laboratory Mary Ellen Zurko remembers the feeling of disappointment. Not long after earning her bachelor\u2019s degree from MIT, she [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2022\/08\/21\/bringing-lessons-from-cybersecurity-to-the-fight-against-disinformation\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":472,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/5847"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=5847"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/5847\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/460"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=5847"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=5847"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=5847"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}