{"id":8131,"date":"2025-05-02T06:28:18","date_gmt":"2025-05-02T06:28:18","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2025\/05\/02\/making-ai-models-more-trustworthy-for-high-stakes-settings\/"},"modified":"2025-05-02T06:28:18","modified_gmt":"2025-05-02T06:28:18","slug":"making-ai-models-more-trustworthy-for-high-stakes-settings","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2025\/05\/02\/making-ai-models-more-trustworthy-for-high-stakes-settings\/","title":{"rendered":"Making AI models more trustworthy for high-stakes settings"},"content":{"rendered":"<p>Author: Adam Zewe | MIT News<\/p>\n<div>\n<p>The ambiguity in medical imaging can present major challenges for clinicians who are trying to identify disease. For instance, in a chest X-ray, pleural effusion, an abnormal buildup of fluid in the lungs, can look very much like pulmonary infiltrates, which are accumulations of pus or blood.<\/p>\n<p>An artificial intelligence model could assist the clinician in X-ray analysis by helping to identify subtle details and boosting the efficiency of the diagnosis process. But because so many possible conditions could be present in one image, the clinician would likely want to consider a set of possibilities, rather than only having one AI prediction to evaluate.<\/p>\n<p>One promising way to produce a set of possibilities, called conformal classification, is convenient because it can be readily implemented on top of an existing machine-learning model. However, it can produce sets that are impractically large.\u00a0<\/p>\n<p>MIT researchers have now developed a simple and effective improvement that can reduce the size of prediction sets by up to 30 percent while also making predictions more reliable.<\/p>\n<p>Having a smaller prediction set may help a clinician zero in on the right diagnosis more efficiently, which could improve and streamline treatment for patients. This method could be useful across a range of classification tasks \u2014 say, for identifying the species of an animal in an image from a wildlife park \u2014 as it provides a smaller but more accurate set of options.<\/p>\n<p>\u201cWith fewer classes to consider, the sets of predictions are naturally more informative in that you are choosing between fewer options. In a sense, you are not really sacrificing anything in terms of accuracy for something that is more informative,\u201d says Divya Shanmugam PhD \u201924, a postdoc at Cornell Tech who conducted this research while she was an MIT graduate student.<\/p>\n<p>Shanmugam is joined on the <a href=\"https:\/\/dmshanmugam.github.io\/pdfs\/CVPR_2025_TTA_CP.pdf\" target=\"_blank\" rel=\"noopener\">paper<\/a> by Helen Lu \u201924; Swami Sankaranarayanan, a former MIT postdoc who is now a research scientist at Lilia Biosciences; and senior author John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering at MIT and a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the Conference on Computer Vision and Pattern Recognition in June.<\/p>\n<p><strong>Prediction guarantees<\/strong><\/p>\n<p>AI assistants deployed for high-stakes tasks, like classifying diseases in medical images, are typically designed to produce a probability score along with each prediction so a user can gauge the model\u2019s confidence. 
But it is difficult to trust a model's predicted confidence because much prior research has shown that these probabilities can be inaccurate. With conformal classification, the model's single prediction is replaced by a set of the most probable diagnoses, along with a guarantee that the correct diagnosis is somewhere in the set.

But the inherent uncertainty in AI predictions often causes the model to output sets that are far too large to be useful.

For instance, if a model is classifying an animal in an image as one of 10,000 potential species, it might output a set of 200 predictions so it can offer a strong guarantee.

"That is quite a few classes for someone to sift through to figure out what the right class is," Shanmugam says.

The technique can also be unreliable because tiny changes to inputs, like slightly rotating an image, can yield entirely different sets of predictions.

To make conformal classification more useful, the researchers applied test-time augmentation (TTA), a technique developed to improve the accuracy of computer vision models.

TTA creates multiple augmentations of a single image in a dataset, perhaps by cropping the image, flipping it, or zooming in. Then it applies a computer vision model to each version of the same image and aggregates its predictions.

"In this way, you get multiple predictions from a single example. Aggregating predictions in this way improves predictions in terms of accuracy and robustness," Shanmugam explains.

Maximizing accuracy

To apply TTA, the researchers hold out some labeled image data that would otherwise be used for the conformal classification process. On these held-out data, they learn to aggregate the augmentations, automatically augmenting the images in a way that maximizes the accuracy of the underlying model's predictions.

Then they run conformal classification on the model's new, TTA-transformed predictions. The conformal classifier outputs a smaller set of probable predictions for the same confidence guarantee.

"Combining test-time augmentation with conformal prediction is simple to implement, effective in practice, and requires no model retraining," Shanmugam says.

Compared to prior work in conformal prediction, their TTA-augmented method reduced prediction set sizes by 10 to 30 percent across experiments on several standard image classification benchmarks.

Importantly, the technique achieves this reduction in prediction set size while maintaining the probability guarantee.

The researchers also found that, even though they sacrifice some labeled data that would normally be used for the conformal classification procedure, TTA boosts accuracy enough to outweigh the cost of losing those data.

"It raises interesting questions about how we used labeled data after model training. The allocation of labeled data between different post-training steps is an important direction for future work," Shanmugam says.
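Not from the article: as a rough sketch of how the pieces fit together, the snippet below slots test-time augmentation in front of the same split-conformal calibration shown earlier. In the paper, the aggregation of augmented predictions is learned on the held-out labeled data; the plain average here is a placeholder for that learned step, and the model and augmentation callables are hypothetical stand-ins for whatever classifier and transforms are actually used.

```python
import numpy as np

def tta_probs(model_probs_fn, image, augmentations):
    """Aggregate a classifier's class probabilities over augmented views.

    model_probs_fn: callable mapping an image to a (K,) probability vector
                    (any existing classifier; no retraining is needed).
    augmentations:  list of callables, each returning a transformed copy of
                    the image (e.g. flips, crops, small rotations).
    """
    views = [image] + [aug(image) for aug in augmentations]
    probs = np.stack([model_probs_fn(v) for v in views])
    # The paper learns this aggregation from held-out data; a plain mean
    # keeps the sketch short.
    return probs.mean(axis=0)

def conformal_with_tta(model_probs_fn, augmentations, cal_images, cal_labels,
                       test_image, alpha=0.1):
    """Calibrate on TTA-aggregated probabilities, then build a prediction set."""
    cal_probs = np.stack([tta_probs(model_probs_fn, x, augmentations)
                          for x in cal_images])
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(scores, q_level, method="higher")
    test_probs = tta_probs(model_probs_fn, test_image, augmentations)
    return np.where(1.0 - test_probs <= qhat)[0]
```

Because the calibration scores and the test-time scores both come from the same TTA-aggregated probabilities, the coverage guarantee carries over, and the sets shrink to whatever extent aggregation sharpens the underlying probabilities; the article reports reductions of 10 to 30 percent from this kind of combination.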
In the future, the researchers want to validate the effectiveness of such an approach in the context of models that classify text instead of images. To further improve the work, they are also considering ways to reduce the amount of computation required for TTA.

This research is funded, in part, by the Wistrom Corporation.

Go to Source (https://news.mit.edu/2025/making-ai-models-more-trustworthy-high-stakes-settings-0501)