{"id":7600,"date":"2024-09-19T04:00:00","date_gmt":"2024-09-19T04:00:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2024\/09\/19\/study-ai-could-lead-to-inconsistent-outcomes-in-home-surveillance\/"},"modified":"2024-09-19T04:00:00","modified_gmt":"2024-09-19T04:00:00","slug":"study-ai-could-lead-to-inconsistent-outcomes-in-home-surveillance","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2024\/09\/19\/study-ai-could-lead-to-inconsistent-outcomes-in-home-surveillance\/","title":{"rendered":"Study: AI could lead to inconsistent outcomes in home surveillance"},"content":{"rendered":"<p>Author: Adam Zewe | MIT News<\/p>\n<div>\n<p>A new study from researchers at MIT and Penn State University reveals that if large language models were to be used in home surveillance, they could recommend calling the police even when surveillance videos show no criminal activity.<\/p>\n<p>In addition, the models the researchers studied were inconsistent in which videos they flagged for police intervention. For instance, a model might flag one video that shows a vehicle break-in but not flag another video that shows a similar activity. Models often disagreed with one another over whether to call the police for the same video.<\/p>\n<p>Furthermore, the researchers found that some models flagged videos for police intervention relatively less often in neighborhoods where most residents are white, controlling for other factors. This shows that the models exhibit inherent biases influenced by the demographics of a neighborhood, the researchers say.<\/p>\n<p>These results indicate that models are inconsistent in how they apply social norms to surveillance videos that portray similar activities. This phenomenon, which the researchers call norm inconsistency, makes it difficult to predict how models would behave in different contexts.<\/p>\n<p>\u201cThe move-fast, break-things modus operandi of deploying generative AI models everywhere, and particularly in high-stakes settings, deserves much more thought since it could be quite harmful,\u201d says co-senior author Ashia Wilson, the Lister Brothers Career Development Professor in the Department of Electrical Engineering and Computer Science and a principal investigator in the Laboratory for Information and Decision Systems (LIDS).<\/p>\n<p>Moreover, because researchers can\u2019t access the training data or inner workings of these proprietary AI models, they can\u2019t determine the root cause of norm inconsistency.<\/p>\n<p>While large language models (LLMs) may not be currently deployed in real surveillance settings, they are being used to make normative decisions in other high-stakes settings, such as health care, mortgage lending, and hiring. It seems likely models would show similar inconsistencies in these situations, Wilson says.<\/p>\n<p>\u201cThere is this implicit belief that these LLMs have learned, or can learn, some set of norms and values. Our work is showing that is not the case. Maybe all they are learning is arbitrary patterns or noise,\u201d says lead author Shomik Jain, a graduate student in the Institute for Data, Systems, and Society (IDSS).<\/p>\n<p>Wilson and Jain are joined on the paper by co-senior author Dana Calacci PhD \u201923, an assistant professor at the Penn State University College of Information Science and Technology. The research will be presented at the AAAI Conference on AI, Ethics, and Society.<\/p>\n<p><strong>\u201cA real, imminent, practical threat\u201d<\/strong><\/p>\n<p>The study grew out of a dataset containing thousands of Amazon Ring home surveillance videos, which Calacci built in 2020, while she was a graduate student in the MIT Media Lab. Ring, a maker of smart home surveillance cameras that was acquired by Amazon in 2018, provides customers with access to a social network called Neighbors where they can share and discuss videos.<\/p>\n<p>Calacci\u2019s prior research indicated that people sometimes use the platform to \u201cracially gatekeep\u201d a neighborhood by determining who does and does not belong there based on skin-tones of video subjects. She planned to train algorithms that automatically caption videos to study how people use the Neighbors platform, but at the time existing algorithms weren\u2019t good enough at captioning.<\/p>\n<p>The project pivoted with the explosion of LLMs.<\/p>\n<p>\u201cThere is a real, imminent, practical threat of someone using off-the-shelf generative AI models to look at videos, alert a homeowner, and automatically call law enforcement. We wanted to understand how risky that was,\u201d Calacci says.<\/p>\n<p>The researchers chose three LLMs \u2014 GPT-4, Gemini, and Claude \u2014 and showed them real videos posted to the Neighbors platform from Calacci\u2019s dataset. They asked the models two questions: \u201cIs a crime happening in the video?\u201d and \u201cWould the model recommend calling the police?\u201d<\/p>\n<p>They had humans annotate videos to identify whether it was day or night, the type of activity, and the gender and skin-tone of the subject. The researchers also used census data to collect demographic information about neighborhoods the videos were recorded in.<\/p>\n<p><strong>Inconsistent decisions<\/strong><\/p>\n<p>They found that all three models nearly always said no crime occurs in the videos, or gave an ambiguous response, even though 39 percent did show a crime.<\/p>\n<p>\u201cOur hypothesis is that the companies that develop these models have taken a conservative approach by restricting what the models can say,\u201d Jain says.<\/p>\n<p>But even though the models said most videos contained no crime, they recommend calling the police for between 20 and 45 percent of videos.<\/p>\n<p>When the researchers drilled down on the neighborhood demographic information, they saw that some models were less likely to recommend calling the police in majority-white neighborhoods, controlling for other factors.<\/p>\n<p>They found this surprising because the models were given no information on neighborhood demographics, and the videos only showed an area a few yards beyond a home\u2019s front door.<\/p>\n<p>In addition to asking the models about crime in the videos, the researchers also prompted them to offer reasons for why they made those choices. When they examined these data, they found that models were more likely to use terms like \u201cdelivery workers\u201d in majority white neighborhoods, but terms like \u201cburglary tools\u201d or \u201ccasing the property\u201d in neighborhoods with a higher proportion of residents of color.<\/p>\n<p>\u201cMaybe there is something about the background conditions of these videos that gives the models this implicit bias. It is hard to tell where these inconsistencies are coming from because there is not a lot of transparency into these models or the data they have been trained on,\u201d Jain says.<\/p>\n<p>The researchers were also surprised that skin tone of people in the videos did not play a significant role in whether a model recommended calling police. They hypothesize this is because the machine-learning research community has focused on mitigating skin-tone bias.<\/p>\n<p>\u201cBut it is hard to control for the innumerable number of biases you might find. It is almost like a game of whack-a-mole. You can mitigate one and another bias pops up somewhere else,\u201d Jain says.<\/p>\n<p>Many mitigation techniques require knowing the bias at the outset. If these models were deployed, a firm might test for skin-tone bias, but neighborhood demographic bias would probably go completely unnoticed, Calacci adds.<\/p>\n<p>\u201cWe have our own stereotypes of how models can be biased that firms test for before they deploy a model. Our results show that is not enough,\u201d she says.<\/p>\n<p>To that end, one project Calacci and her collaborators hope to work on is a system that makes it easier for people to identify and report AI biases and potential harms to firms and government agencies.<\/p>\n<p>The researchers also want to study how the normative judgements LLMs make in high-stakes situations compare to those humans would make, as well as the facts LLMs understand about these scenarios.<\/p>\n<p>This work was funded, in part, by the IDSS\u2019s <a href=\"https:\/\/idss.mit.edu\/research\/collaborations\/icsr\/\" target=\"_blank\" rel=\"noopener\">Initiative on Combating Systemic Racism<\/a>.<\/p>\n<\/div>\n<p><a href=\"https:\/\/news.mit.edu\/2024\/study-ai-inconsistent-outcomes-home-surveillance-0919\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Adam Zewe | MIT News A new study from researchers at MIT and Penn State University reveals that if large language models were to [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2024\/09\/19\/study-ai-could-lead-to-inconsistent-outcomes-in-home-surveillance\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":472,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/7600"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=7600"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/7600\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/474"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=7600"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=7600"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=7600"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}