{"id":6983,"date":"2023-12-08T05:00:00","date_gmt":"2023-12-08T05:00:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2023\/12\/08\/automated-system-teaches-users-when-to-collaborate-with-an-ai-assistant\/"},"modified":"2023-12-08T05:00:00","modified_gmt":"2023-12-08T05:00:00","slug":"automated-system-teaches-users-when-to-collaborate-with-an-ai-assistant","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2023\/12\/08\/automated-system-teaches-users-when-to-collaborate-with-an-ai-assistant\/","title":{"rendered":"Automated system teaches users when to collaborate with an AI assistant"},"content":{"rendered":"<p>Author: Adam Zewe | MIT News<\/p>\n<div>\n<p>Artificial intelligence models that pick out patterns in images can often do so better than human eyes \u2014 but not always. If a radiologist is using an AI model to help her determine whether a patient\u2019s X-rays show signs of pneumonia, when should she trust the model\u2019s advice and when should she ignore it?<\/p>\n<p>A customized onboarding process could help this radiologist answer that question, according to researchers at MIT and the MIT-IBM Watson AI Lab. They designed a system that teaches a user when to collaborate with an AI assistant.<\/p>\n<p>In this case, the training method might find situations where the radiologist trusts the model\u2019s advice \u2014 except she shouldn\u2019t because the model is wrong. The system automatically learns rules for how she should collaborate with the AI, and describes them with natural language.<\/p>\n<p>During onboarding, the radiologist practices collaborating with the AI using training exercises based on these rules, receiving feedback about her performance and the AI\u2019s performance.<\/p>\n<p>The researchers found that this onboarding procedure led to about a 5 percent improvement in accuracy when humans and AI collaborated on an image prediction task. 
Their results also show that just telling the user when to trust the AI, without training, led to worse performance.<\/p>\n<p>Importantly, the researchers\u2019 system is fully automated, so it learns to create the onboarding process based on data from the human and AI performing a specific task. It can also adapt to different tasks, so it can be scaled up and used in many situations where humans and AI models work together, such as in social media content moderation, writing, and programming.<\/p>\n<p>\u201cSo often, people are given these AI tools to use without any training to help them figure out when it is going to be helpful. That\u2019s not what we do with nearly every other tool that people use \u2014 there is almost always some kind of tutorial that comes with it. But for AI, this seems to be missing. We are trying to tackle this problem from a methodological and behavioral perspective,\u201d says Hussein Mozannar, a graduate student in the Social and Engineering Systems doctoral program within the Institute for Data, Systems, and Society (IDSS) and lead author of <a href=\"https:\/\/arxiv.org\/pdf\/2311.01007.pdf\" target=\"_blank\" rel=\"noopener\">a paper about this training process<\/a>.<\/p>\n<p>The researchers envision that such onboarding will be a crucial part of training for medical professionals.<\/p>\n<p>\u201cOne could imagine, for example, that doctors making treatment decisions with the help of AI will first have to do training similar to what we propose. We may need to rethink\u00a0everything from continuing medical education to the way clinical trials are designed,\u201d says senior author David Sontag, a professor of EECS, a member of the MIT-IBM Watson AI Lab and the MIT Jameel Clinic, and the leader of the Clinical Machine Learning Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).<\/p>\n<p>Mozannar, who is also a researcher with the Clinical Machine Learning Group, is joined on the paper by Jimin J. 
Lee, an undergraduate in electrical engineering and computer science; Dennis Wei, a senior research scientist at IBM Research; and Prasanna Sattigeri and Subhro Das, research staff members at the MIT-IBM Watson AI Lab. The paper will be presented at the Conference on Neural Information Processing Systems.<\/p>\n<p><strong>Training that evolves<\/strong><\/p>\n<p>Existing onboarding methods for human-AI collaboration are often composed of training materials produced by human experts for specific use cases, making them difficult to scale up. Some related techniques rely on explanations, where the AI tells the user its confidence in each decision, but research has shown that explanations are rarely helpful, Mozannar says.<\/p>\n<p>\u201cThe AI model\u2019s capabilities are constantly evolving, so the use cases where the human could potentially benefit from it are growing over time. At the same time, the user\u2019s perception of the model continues changing. So, we need a training procedure that also evolves over time,\u201d he adds.<\/p>\n<p>To accomplish this, their onboarding method is automatically learned from data. It is built from a dataset that contains many instances of a task, such as detecting the presence of a traffic light from a blurry image.<\/p>\n<p>The system\u2019s first step is to collect data on the human and AI performing this task. In this case, the human would try to predict, with the help of AI, whether blurry images contain traffic lights.<\/p>\n<p>The system embeds these data points onto a latent space, which is a representation of data in which similar data points are closer together. It uses an algorithm to discover regions of this space where the human collaborates incorrectly with the AI. 
These regions capture instances where the human trusted the AI\u2019s prediction but the prediction was wrong, and vice versa.<\/p>\n<p>For example, the human might mistakenly trust the AI\u2019s prediction when images show a highway at night.<\/p>\n<p>After discovering the regions, a second algorithm uses a large language model to describe each region as a rule, using natural language. The algorithm iteratively fine-tunes that rule by finding contrasting examples. It might describe this region as \u201cignore AI when it is a highway during the night.\u201d<\/p>\n<p>These rules are used to build training exercises. The onboarding system shows an example to the human, in this case a blurry highway scene at night, as well as the AI\u2019s prediction, and asks the user if the image shows traffic lights. The user can answer yes, no, or use the AI\u2019s prediction.<\/p>\n<p>If the human is wrong, they are shown the correct answer and performance statistics for the human and AI on these instances of the task. The system does this for each region, and at the end of the training process, repeats the exercises the human got wrong.<\/p>\n<p>\u201cAfter that, the human has learned something about these regions that we hope they will take away in the future to make more accurate predictions,\u201d Mozannar says.<\/p>\n<p><strong>Onboarding boosts accuracy<\/strong><\/p>\n<p>The researchers tested this system with users on two tasks \u2014 detecting traffic lights in blurry images and answering multiple-choice questions from many domains (such as biology, philosophy, and computer science).<\/p>\n<p>They first showed users a card with information about the AI model, how it was trained, and a breakdown of its performance on broad categories. 
Users were split into five groups: Some were only shown the card, some went through the researchers\u2019 onboarding procedure, some went through a baseline onboarding procedure, some went through the researchers\u2019 onboarding procedure and were given recommendations of when they should or should not trust the AI, and others were only given the recommendations.<\/p>\n<p>Only the researchers\u2019 onboarding procedure without recommendations improved users\u2019 accuracy significantly, boosting their performance on the traffic light prediction task by about 5 percent without slowing them down. However, onboarding was not as effective for the question-answering task. The researchers believe this is because the AI model, ChatGPT, provided explanations with each answer that conveyed whether it should be trusted.<\/p>\n<p>But providing recommendations without onboarding had the opposite effect \u2014 users not only performed worse, but also took more time to make predictions.<\/p>\n<p>\u201cWhen you only give someone recommendations, it seems like they get confused and don\u2019t know what to do. It derails their process. People also don\u2019t like being told what to do, so that is a factor as well,\u201d Mozannar says.<\/p>\n<p>Providing recommendations alone could harm the user if those recommendations are wrong, he adds. With onboarding, on the other hand, the biggest limitation is the amount of available data. If there aren\u2019t enough data, the onboarding stage won\u2019t be as effective, he says.<\/p>\n<p>In the future, he and his collaborators want to conduct larger studies to evaluate the short- and long-term effects of onboarding. They also want to leverage unlabeled data for the onboarding process, and find methods to effectively reduce the number of regions without omitting important examples.<\/p>\n<p>\u201cPeople are adopting AI systems willy-nilly, and indeed AI offers great potential, but these AI agents still sometimes make mistakes. 
Thus, it\u2019s crucial for AI developers to devise methods that help humans know when it\u2019s safe to rely on the AI\u2019s suggestions,\u201d says Dan Weld, professor emeritus at the Paul G. Allen School of Computer Science and Engineering at the University of Washington, who was not involved with this research. \u201cMozannar et al. have created an innovative method for identifying situations where the AI is trustworthy and, importantly, describing them to people in a way that leads to better human-AI team interactions.\u201d<\/p>\n<p>This work is funded, in part, by the MIT-IBM Watson AI Lab.<\/p>\n<\/div>\n<p><a href=\"https:\/\/news.mit.edu\/2023\/automated-system-teaches-collaborate-ai-assistant-1208\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Adam Zewe | MIT News Artificial intelligence models that pick out patterns in images can often do so better than human eyes \u2014 but [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2023\/12\/08\/automated-system-teaches-users-when-to-collaborate-with-an-ai-assistant\/\">Read 
More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":469,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/6983"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=6983"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/6983\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/471"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=6983"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=6983"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=6983"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}