Putting clear bounds on uncertainty

Author: Steve Nadis | MIT CSAIL
January 23, 2023

In science and technology, there has been a long and steady drive toward improving the accuracy of measurements of all kinds, along with parallel efforts to enhance the resolution of images. An accompanying goal is to reduce the uncertainty in the estimates that can be made, and the inferences drawn, from the data (visual or otherwise) that have been collected. Yet uncertainty can never be wholly eliminated. And since we have to live with it, at least to some extent, there is much to be gained by quantifying the uncertainty as precisely as possible.

Expressed in other terms, we’d like to know just how uncertain our uncertainty is.

That issue was taken up in a new study (https://arxiv.org/abs/2207.10074) led by Swami Sankaranarayanan, a postdoc at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and his co-authors: Anastasios Angelopoulos and Stephen Bates of the University of California at Berkeley; Yaniv Romano of Technion, the Israel Institute of Technology; and Phillip Isola, an associate professor of electrical engineering and computer science at MIT. These researchers succeeded not only in obtaining accurate measures of uncertainty but also in finding a way to display that uncertainty in a manner the average person could grasp.

Their paper, which was presented in December at the Neural Information Processing Systems Conference in New Orleans, relates to computer vision, a field of artificial intelligence that involves training computers to glean information from digital images. The focus of this research is on images that are partially smudged or corrupted (due to missing pixels), as well as on methods, computer algorithms in particular, that are designed to uncover the part of the signal that is marred or otherwise concealed. An algorithm of this sort, Sankaranarayanan explains, “takes the blurred image as the input and gives you a clean image as the output,” a process that typically occurs in a couple of steps.

First, there is an encoder, a kind of neural network specifically trained by the researchers for the task of de-blurring fuzzy images. The encoder takes a distorted image and, from that, creates an abstract (or “latent”) representation of a clean image: a list of numbers that is intelligible to a computer but would not make sense to most humans. The next step is carried out by a decoder, typically also a neural network, of which there are several types. Sankaranarayanan and his colleagues worked with a kind of decoder called a “generative” model. In particular, they used an off-the-shelf version called StyleGAN, which takes the numbers from the encoded representation (of a cat, for instance) as its input and then constructs a complete, cleaned-up image (of that particular cat). So the entire process, including the encoding and decoding stages, yields a crisp picture from an originally muddied rendering.

But how much faith can someone place in the accuracy of the resultant image? And, as addressed in the December 2022 paper, what is the best way to represent the uncertainty in that image? The standard approach is to create a “saliency map,” which ascribes a probability value, somewhere between 0 and 1, to indicate the confidence the model has in the correctness of each pixel, taken one at a time. This strategy has a drawback, according to Sankaranarayanan, “because the prediction is performed independently for each pixel. But meaningful objects occur within groups of pixels, not within an individual pixel,” he adds, which is why he and his colleagues are proposing an entirely different way of assessing uncertainty.

Their approach is centered around the “semantic attributes” of an image: groups of pixels that, when taken together, have meaning, making up a human face, for example, or a dog, or some other recognizable thing. The objective, Sankaranarayanan maintains, “is to estimate uncertainty in a way that relates to the groupings of pixels that humans can readily interpret.”

Whereas the standard method might yield a single image, constituting the “best guess” as to what the true picture should be, the uncertainty in that representation is normally hard to discern. The new paper argues that for use in the real world, uncertainty should be presented in a way that holds meaning for people who are not experts in machine learning. Rather than producing a single image, the authors have devised a procedure for generating a range of images, each of which might be correct. Moreover, they can set precise bounds on the range, or interval, and provide a probabilistic guarantee that the true depiction lies somewhere within that range. A narrower range can be provided if the user is comfortable with, say, 90 percent certitude, and a narrower range still if more risk is acceptable.

The authors believe their paper puts forth the first algorithm, designed for a generative model, that can establish uncertainty intervals relating to meaningful (semantically interpretable) features of an image and coming with “a formal statistical guarantee.” While that is an important milestone, Sankaranarayanan considers it merely a step toward “the ultimate goal. So far, we have been able to do this for simple things, like restoring images of human faces or animals, but we want to extend this approach into more critical domains, such as medical imaging, where our ‘statistical guarantee’ could be especially important.”

Suppose that the film, or radiograph, of a chest X-ray is blurred, he adds, “and you want to reconstruct the image. If you are given a range of images, you want to know that the true image is contained within that range, so you are not missing anything critical,” information that might reveal whether or not a patient has lung cancer or pneumonia. In fact, Sankaranarayanan and his colleagues have already begun working with a radiologist to see if their algorithm for predicting pneumonia could be useful in a clinical setting.

Their work may also have relevance in the law enforcement field, he says. “The picture from a surveillance camera may be blurry, and you want to enhance that. Models for doing that already exist, but it is not easy to gauge the uncertainty. And you don’t want to make a mistake in a life-or-death situation.” The tools that he and his colleagues are developing could help identify a guilty person as well as exonerate an innocent one.

Much of what we do, and many of the things happening in the world around us, are shrouded in uncertainty, Sankaranarayanan notes. Therefore, gaining a firmer grasp of that uncertainty could help us in countless ways. For one thing, it can tell us more about exactly what it is we do not know.

Angelopoulos was supported by the National Science Foundation. Bates was supported by the Foundations of Data Science Institute and the Simons Institute. Romano was supported by the Israel Science Foundation and by a Career Advancement Fellowship from Technion. Sankaranarayanan’s and Isola’s research for this project was sponsored by the U.S. Air Force Research Laboratory and the U.S. Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. MIT SuperCloud and the Lincoln Laboratory Supercomputing Center also provided computing resources that contributed to the results reported in this work.

Source: https://news.mit.edu/2023/putting-clear-bounds-uncertainty-0123
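The “formal statistical guarantee” described above, an interval that contains the truth with a user-chosen probability, is the kind of guarantee typically obtained via conformal prediction, a technique the co-authors are known for. The sketch below is only an illustration of that general idea, not the paper’s actual algorithm: it calibrates a margin for a single hypothetical semantic attribute (a number in the latent representation) from held-out reconstruction errors, so that the resulting interval covers the true value with roughly the requested probability. All names and the synthetic calibration data are assumptions for the example.

```python
import numpy as np

def conformal_quantile(scores, alpha=0.1):
    """Split-conformal margin: with probability about 1 - alpha, a fresh
    error score from the same distribution falls below this threshold."""
    n = len(scores)
    # Finite-sample-corrected quantile level, clipped to 1.0 for tiny n.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

def semantic_interval(point_estimate, qhat):
    """Interval for one semantic attribute of the reconstruction:
    the model's point estimate widened by the calibrated margin."""
    return point_estimate - qhat, point_estimate + qhat

# Hypothetical calibration data: |predicted - true| for one attribute,
# measured on held-out images where the clean image is known.
rng = np.random.default_rng(0)
calib_scores = rng.exponential(scale=0.5, size=1000)

qhat = conformal_quantile(calib_scores, alpha=0.10)  # 90 percent certitude
lo, hi = semantic_interval(1.2, qhat)  # bounds around a point estimate
```

Raising `alpha` (accepting more risk) shrinks the calibrated margin, matching the article’s trade-off between certitude and interval width.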