{"id":2862,"date":"2019-11-26T18:56:05","date_gmt":"2019-11-26T18:56:05","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2019\/11\/26\/producing-better-guides-for-medical-image-analysis\/"},"modified":"2019-11-26T18:56:05","modified_gmt":"2019-11-26T18:56:05","slug":"producing-better-guides-for-medical-image-analysis","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2019\/11\/26\/producing-better-guides-for-medical-image-analysis\/","title":{"rendered":"Producing better guides for medical-image analysis"},"content":{"rendered":"<p>Author: Rob Matheson | MIT News Office<\/p>\n<div>\n<p>MIT researchers have devised a method that accelerates the process of creating and customizing templates used in medical-image analysis to guide disease diagnosis.<\/p>\n<p>One use of medical-image analysis is to crunch datasets of patients\u2019 medical images and capture structural relationships that may indicate the progression of diseases. In many cases, analysis requires a common image template, called an \u201catlas,\u201d that\u2019s an average representation of a given patient population. Atlases serve as a reference for comparison, for example to identify clinically significant changes in brain structures over time.<\/p>\n<p>Building a template is a time-consuming, laborious process that often takes days or weeks, especially for 3D brain scans. To save time, researchers often download publicly available atlases previously generated by research groups. But those don\u2019t fully capture the diversity of individual datasets or specific subpopulations, such as those with new diseases or from young children. 
Ultimately, the atlas can\u2019t be smoothly mapped onto outlier images, producing poor results.<\/p>\n<p>In a paper being presented at the Conference on Neural Information Processing Systems in December, the researchers describe an automated machine-learning model that generates \u201cconditional\u201d atlases based on specific patient attributes, such as age, sex, and disease. By leveraging shared information from across an entire dataset, the model can also synthesize atlases for patient subpopulations that may be entirely missing from the dataset.<\/p>\n<p>\u201cThe world needs more atlases,\u201d says first author Adrian Dalca, a former postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and now a faculty member in radiology at Harvard Medical School and Massachusetts General Hospital. \u201cAtlases are central to many medical image analyses. This method can build a lot more of them and build conditional ones as well.\u201d<\/p>\n<p>Joining Dalca on the paper are Marianne Rakic, a visiting researcher in CSAIL; John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering and head of CSAIL\u2019s Data Driven Inference Group; and Mert R. Sabuncu of Cornell University.<\/p>\n<p><strong>Simultaneous alignment and atlases<\/strong><\/p>\n<p>Traditional atlas-building methods run lengthy, iterative optimization processes on all images in a dataset. They align, say, all 3D brain scans to an initial (often blurry) atlas, and compute a new average image from the aligned scans. They repeat this align-and-average process, registering every scan to each new average, until it converges on a final atlas that minimizes the extent to which all scans in the dataset must deform to match it. Running this process on patient subpopulations can be complex and imprecise if there isn\u2019t enough data available.<\/p>\n<p>Mapping an atlas to a new scan generates a \u201cdeformation field,\u201d which characterizes the differences between the two images. 
This captures structural variations, which can then be further analyzed. In brain scans, for instance, structural variations can be due to tissue degeneration at different stages of a disease.<\/p>\n<p>In previous work, Dalca and other researchers developed a neural network to rapidly align these images. In part, that helped speed up the traditional atlas-building process. \u201cWe said, \u2018Why can\u2019t we build conditional atlases while learning to align images at the same time?\u2019\u201d Dalca says.<\/p>\n<p>To do so, the researchers combined two neural networks: One network automatically learns an atlas at each iteration, and another \u2014 adapted from the previous research \u2014 simultaneously aligns that atlas to images in a dataset.<\/p>\n<p>In training, the joint network is fed a random image from the dataset along with that patient\u2019s attributes. From that, it estimates an attribute-conditional atlas. The second network aligns the estimated atlas with the input image and generates a deformation field.<\/p>\n<p>The deformation field generated for each image pair feeds into a \u201closs function,\u201d the quantity a machine-learning model is trained to minimize. In this case, the loss penalizes the distance between the learned atlas and each image, so the network continuously refines the atlas to smoothly align with any given image across the dataset.<\/p>\n<p><strong>On-demand atlases<\/strong><\/p>\n<p>The end result is a function that\u2019s learned how specific attributes, such as age, correlate to structural variations across all images in a dataset. 
When new patient attributes are plugged into the function, it leverages all the information learned across the dataset to synthesize an on-demand atlas \u2014 even if data for those attributes is missing or scarce in the dataset.<\/p>\n<p>Say someone wants a brain scan atlas for a 45-year-old female patient from a dataset with information from patients aged 30 to 90, but with little data for women aged 40 to 50. The function will analyze patterns of how the brain changes between the ages of 30 and 90 and incorporate what little data exists for that age and sex. Then, it will produce the most representative atlas for females of the desired age. In their paper, the researchers verified the function by generating conditional templates for various age groups from 15 to 90.<\/p>\n<p>The researchers hope clinicians can use the model to build their own atlases quickly from their own, potentially small, datasets. Dalca is now collaborating with researchers at Massachusetts General Hospital, for instance, to harness a dataset of pediatric brain scans to generate conditional atlases for younger children, which are hard to come by.<\/p>\n<p>A big dream is to build one function that can generate conditional atlases for any subpopulation, spanning birth to 90 years old. Researchers could log into a webpage, input an age, sex, diseases, and other parameters, and get an on-demand conditional atlas. \u201cThat would be wonderful, because everyone can refer to this one function as a single universal atlas reference,\u201d Dalca says.<\/p>\n<p>Another potential application beyond medical imaging is athletic training. Someone could train the function to generate an atlas for, say, a tennis player\u2019s serve motion. 
The player could then compare new serves against the atlas to see exactly where they kept proper form or where things went wrong.<\/p>\n<p>\u201cIf you watch sports, it\u2019s usually commenters saying they noticed if someone\u2019s form was off from one time compared to another,\u201d Dalca says. \u201cBut you can imagine that it could be much more quantitative than that.\u201d<\/p>\n<\/div>\n<p><a href=\"http:\/\/news.mit.edu\/2019\/ai-model-atlas-patient-brain-analysis-1126\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Rob Matheson | MIT News Office MIT researchers have devised a method that accelerates the process for creating and customizing templates used in medical-image [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2019\/11\/26\/producing-better-guides-for-medical-image-analysis\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":475,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2862"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=2862"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2862\/revisions"}],"wp:feature
dmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/473"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=2862"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=2862"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=2862"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}