{"id":5052,"date":"2021-09-27T13:00:00","date_gmt":"2021-09-27T13:00:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2021\/09\/27\/using-ai-and-old-reports-to-understand-new-medical-images\/"},"modified":"2021-09-27T13:00:00","modified_gmt":"2021-09-27T13:00:00","slug":"using-ai-and-old-reports-to-understand-new-medical-images","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2021\/09\/27\/using-ai-and-old-reports-to-understand-new-medical-images\/","title":{"rendered":"Using AI and old reports to understand new medical images"},"content":{"rendered":"<p>Author: Steve Nadis | MIT CSAIL<\/p>\n<div>\n<p>Getting a quick and accurate reading of an X-ray or some other medical images can be vital to a patient\u2019s health and might even save a life. Obtaining such an assessment depends on the availability of a skilled radiologist and, consequently, a rapid response is not always possible. For that reason, says Ruizhi \u201cRay\u201d Liao, a postdoc and a recent PhD graduate at MIT\u2019s Computer Science and Artificial Intelligence Laboratory (CSAIL), \u201cwe want to train machines that are capable of reproducing what radiologists do every day.\u201d Liao is first author of a new paper, written with other researchers at MIT and Boston-area hospitals, that is being presented this fall at MICCAI 2021, an international conference on medical image computing.<\/p>\n<p>Although the idea of utilizing computers to interpret images is not new, the MIT-led group is drawing on an underused resource \u2014 the vast body of radiology reports that accompany medical images, written by radiologists in routine clinical practice \u2014 to improve the interpretive abilities of machine learning algorithms. 
The team is also using a concept from information theory called mutual information \u2014 a statistical measure of the interdependence of two variables \u2014 to boost the effectiveness of their approach.<\/p>\n<p>Here\u2019s how it works: First, a neural network is trained to determine the extent of a disease, such as pulmonary edema, by being presented with numerous X-ray images of patients\u2019 lungs, along with a doctor\u2019s rating of the severity of each case. That information is encapsulated within a collection of numbers. A separate neural network does the same for text, representing its information in a different collection of numbers. A third neural network then integrates the information between images and text in a coordinated way that maximizes the mutual information between the two datasets. \u201cWhen the mutual information between images and text is high, that means that images are highly predictive of the text and the text is highly predictive of the images,\u201d explains MIT Professor Polina Golland, a principal investigator at CSAIL.<\/p>\n<p>Liao, Golland, and their colleagues have introduced another innovation that confers several advantages: Rather than working from entire images and radiology reports, they break the reports down into individual sentences and the portions of those images that the sentences pertain to. Doing things this way, Golland says, \u201cestimates the severity of the disease more accurately than if you view the whole image and whole report. 
And because the model is examining smaller pieces of data, it can learn more readily and has more samples to train on.\u201d<\/p>\n<p>While Liao finds the computer science aspects of this project fascinating, a primary motivation for him is \u201cto develop technology that is clinically meaningful and applicable to the real world.\u201d<\/p>\n<p>To that end, a pilot program is currently underway at the Beth Israel Deaconess Medical Center to see how MIT\u2019s machine learning model could influence the way doctors managing heart failure patients make decisions, especially in an emergency room setting where speed is of the essence.<\/p>\n<p>The model could have very broad applicability, according to Golland. \u201cIt could be used for any kind of imagery and associated text \u2014 inside or outside the medical realm. This general approach, moreover, could be applied beyond images and text, which is exciting to think about.\u201d<\/p>\n<p>Liao wrote the paper alongside MIT CSAIL postdoc Daniel Moyer and Golland; Miriam Cha and Keegan Quigley at MIT Lincoln Laboratory; William M. 
Wells at Harvard Medical School and MIT CSAIL; and clinical collaborators Seth Berkowitz and Steven Horng at Beth Israel Deaconess Medical Center.<\/p>\n<p>The work was sponsored by the NIH NIBIB Neuroimaging Analysis Center, Wistron, MIT-IBM Watson AI Lab, MIT Deshpande Center for Technological Innovation, MIT Abdul Latif Jameel Clinic for Machine Learning in Health (J-Clinic), and MIT Lincoln Lab.<\/p>\n<\/div>\n<p><a href=\"https:\/\/news.mit.edu\/2021\/using-ai-and-old-reports-understand-new-medical-images-0927\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Steve Nadis | MIT CSAIL Getting a quick and accurate reading of an X-ray or some other medical images can be vital to a [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2021\/09\/27\/using-ai-and-old-reports-to-understand-new-medical-images\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":460,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/5052"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=5052"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/5052\/revisions"}],"wp:
featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/458"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=5052"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=5052"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=5052"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}