{"id":5405,"date":"2022-02-08T17:00:00","date_gmt":"2022-02-08T17:00:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2022\/02\/08\/ask-a-techspert-what-does-ai-do-when-it-doesnt-know\/"},"modified":"2022-02-08T17:00:00","modified_gmt":"2022-02-08T17:00:00","slug":"ask-a-techspert-what-does-ai-do-when-it-doesnt-know","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2022\/02\/08\/ask-a-techspert-what-does-ai-do-when-it-doesnt-know\/","title":{"rendered":"Ask a Techspert: What does AI do when it doesn\u2019t know?"},"content":{"rendered":"<p>Author: <\/p>\n<div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p>As humans, we constantly learn from the world around us. We experience inputs that shape our knowledge \u2014 including the boundaries of both what we know and what we don\u2019t know.<\/p>\n<p>Many of today\u2019s machines also learn by example. However, these machines are typically trained on datasets that don\u2019t always include the rare or out-of-the-ordinary examples that inevitably come up in real-life scenarios. What is an algorithm to do when faced with the unknown?<\/p>\n<p>I recently spoke with Abhijit Guha Roy, an engineer on the <a href=\"https:\/\/health.google\/health-research\/imaging-and-diagnostics\/\">Health AI<\/a> team, and Ian Kivlichan, an engineer on the <a href=\"https:\/\/jigsaw.google.com\/\">Jigsaw<\/a> team, to hear more about using AI in real-world scenarios and to better understand the importance of training it to know when it doesn\u2019t know.<\/p>\n<p><b>Abhijit, tell me about your recent research in the dermatology space.<\/b><\/p>\n<p>We\u2019re applying deep learning to a number of areas in health, including medical imaging, where it can help identify health conditions and diseases that might require treatment. 
In the dermatological field, we have shown that AI can help identify possible skin issues, and we are advancing <a href=\"https:\/\/blog.google\/technology\/health\/ai-assists-doctors-interpreting-skin-conditions\/\">research<\/a> and products, including <a href=\"https:\/\/blog.google\/technology\/health\/ai-dermatology-preview-io-2021\/\">DermAssist<\/a>, that can support both clinicians and people like you and me.<\/p>\n<p>In these real-world settings, the algorithm might come up against something it\u2019s never seen before. Rare conditions, while individually infrequent, might not be so rare in aggregate. These so-called \u201cout-of-distribution\u201d examples are a common problem for AI systems, which can perform less well when they are exposed to things they haven\u2019t seen before in their training.<\/p>\n<p><b>Can you explain what \u201cout-of-distribution\u201d means for AI?<\/b><\/p>\n<p>Most traditional machine learning examples used to train AI deal with fairly unsubtle \u2014 or obvious \u2014 changes. For example, if an algorithm trained to identify cats and dogs comes across a car, it can typically detect that the car \u2014 an \u201cout-of-distribution\u201d example \u2014 is an outlier. Building an AI system that can recognize the presence of something it hasn\u2019t seen before in training is called \u201cout-of-distribution detection,\u201d and it is an active and promising field of AI research.<\/p>\n<p><b>Okay, let\u2019s go back to how this applies to AI in medical settings.<\/b><\/p>\n<p>Going back to our research in the dermatology space, the differences between skin conditions can be much more subtle than distinguishing a car from a dog or a cat, even more subtle than distinguishing a previously unseen \u201cpick-up truck\u201d from a \u201ctruck\u201d. 
As such, the out-of-distribution detection task in medical AI demands even more of our focused attention.<\/p>\n<p>This is where <a href=\"https:\/\/ai.googleblog.com\/2022\/01\/does-your-medical-image-classifier-know.html\">our latest research<\/a> comes in. We trained our algorithm to recognize even the most subtle of outliers (a so-called \u201cnear-out-of-distribution\u201d detection task). Then, instead of the model inaccurately guessing, it can take a safer course of action \u2014 like deferring to human experts.<\/p>\n<p><b>Ian, you\u2019re working on another area where AI needs to know when it doesn\u2019t know something. What\u2019s that?<\/b><\/p>\n<p>The field of content moderation. Our team at <a href=\"http:\/\/jigsaw.google.com\/\">Jigsaw<\/a> used AI to build a free tool called <a href=\"http:\/\/perspectiveapi.com\/\">Perspective<\/a> that scores comments according to how likely they are to be considered toxic by readers. Our AI algorithms help identify toxic language and online harassment at scale so that human content moderators can make better decisions for their online communities. A range of online platforms use Perspective more than 600 million times a day to reduce toxicity and the human time required to moderate content.<\/p>\n<p>In the real world, online conversations \u2014 both the things people say and even the ways they say them \u2014 are continually changing. 
For example, two years ago, nobody would have understood the phrase \u201cnon-fungible token (NFT).\u201d Our language is always evolving, which means a tool like Perspective doesn\u2019t just need to identify potentially toxic or harassing comments; it also needs to \u201cknow when it doesn\u2019t know,\u201d and then defer to human moderators when it encounters comments very different from anything it has encountered before.<\/p>\n<p>In <a href=\"https:\/\/aclanthology.org\/2021.woah-1.5.pdf\">our recent research<\/a>, we trained Perspective to identify comments it was uncertain about and flag them for separate human review. By prioritizing these comments, human moderators can correct more than 80% of the mistakes the AI might otherwise have made.<\/p>\n<p><b>What connects these two examples?<\/b><\/p>\n<p>Our work has more in common with the dermatology problem than you\u2019d expect at first glance \u2014 even though the problems we try to solve are so different.<\/p>\n<p>Building AI that knows when it doesn\u2019t know something means you can prevent certain errors that might have unintended consequences. In both cases, the safest course of action for the algorithm entails deferring to human experts rather than trying to make a decision that could lead to potentially negative effects downstream.<\/p>\n<p>There are some fields where this isn\u2019t as important and others where it\u2019s critical. You might not care if an automated vegetable sorter incorrectly sorts a purple carrot after being trained on orange carrots, but you would definitely care if an algorithm didn\u2019t know what to do about an abnormal shadow on an X-ray that a doctor might recognize as an unexpected cancer.<\/p>\n<p><b>How is AI uncertainty related to AI safety?<\/b><\/p>\n<p>Most of us are familiar with safety protocols in the workplace. 
In safety-critical industries like aviation or medicine, protocols like \u201csafety checklists\u201d are routine and essential to preventing harm to both the workers and the people they serve.<\/p>\n<p>It\u2019s important that we also think about safety protocols when it comes to machines and algorithms, especially when they are integrated into our daily workflow and aid in decision-making or triaging that can have a downstream impact.<\/p>\n<p>Teaching algorithms to refrain from guessing in unfamiliar scenarios and to ask for help from human experts falls within these protocols, and is one of the ways we can reduce harm and build trust in our systems. This is something Google is committed to, as outlined in its <a href=\"https:\/\/ai.google\/principles\/\">AI Principles<\/a>.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<p><a href=\"https:\/\/blog.google\/technology\/health\/ask-a-techspert-what-does-ai-do-when-it-doesnt-know\/\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: As humans, we constantly learn from the world around us. 
We experience inputs that shape our knowledge \u2014 including the boundaries of both what [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2022\/02\/08\/ask-a-techspert-what-does-ai-do-when-it-doesnt-know\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":459,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/5405"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=5405"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/5405\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/458"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=5405"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=5405"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=5405"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}