{"id":2279,"date":"2019-06-19T13:38:14","date_gmt":"2019-06-19T13:38:14","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2019\/06\/19\/from-one-brain-scan-more-information-for-medical-artificial-intelligence\/"},"modified":"2019-06-19T13:38:14","modified_gmt":"2019-06-19T13:38:14","slug":"from-one-brain-scan-more-information-for-medical-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2019\/06\/19\/from-one-brain-scan-more-information-for-medical-artificial-intelligence\/","title":{"rendered":"From one brain scan, more information for medical artificial intelligence"},"content":{"rendered":"<p>Author: Rob Matheson | MIT News Office<\/p>\n<div>\n<p>MIT researchers have devised a novel method to glean more information from images used to train machine-learning models, including those that can analyze medical scans to help diagnose and treat brain conditions.<\/p>\n<p>An active new area in medicine involves training deep-learning models to detect structural patterns in brain scans associated with neurological diseases and disorders, such as Alzheimer\u2019s disease and multiple sclerosis. But collecting the training data is laborious: All anatomical structures in each scan must be separately outlined or hand-labeled by neurological experts. And, in some cases, such as for rare brain conditions in children, only a few scans may be available in the first place.<\/p>\n<p>In a paper presented at the recent Conference on Computer Vision and Pattern Recognition, the MIT researchers describe a system that uses a single labeled scan, along with unlabeled scans, to automatically synthesize a massive dataset of distinct training examples. 
The dataset can be used to better train machine-learning models to find anatomical structures in new scans \u2014 the more training data, the better those predictions.<\/p>\n<p>The crux of the work is automatically generating data for the \u201cimage segmentation\u201d process, which partitions an image into regions of pixels that are more meaningful and easier to analyze. To do so, the system uses a convolutional neural network (CNN), a machine-learning model that\u2019s become a powerhouse for image-processing tasks. The network analyzes a lot of unlabeled scans from different patients and different equipment to \u201clearn\u201d anatomical, brightness, and contrast variations. Then, it applies a random combination of those learned variations to a single labeled scan to synthesize new scans that are both realistic and accurately labeled. These newly synthesized scans are then fed into a different CNN that learns how to segment new images.<\/p>\n<p>\u201cWe\u2019re hoping this will make image segmentation more accessible in realistic situations where you don\u2019t have a lot of training data,\u201d says first author Amy Zhao, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and Computer Science and Artificial Intelligence Laboratory (CSAIL). 
\u201cIn our approach, you can learn to mimic the variations in unlabeled scans to intelligently synthesize a large dataset to train your network.\u201d<\/p>\n<p>There\u2019s interest in using the system, for instance, to help train predictive-analytics models at Massachusetts General Hospital, Zhao says, where only one or two labeled scans may exist of particularly uncommon brain conditions among child patients.<\/p>\n<p>Joining Zhao on the paper are Guha Balakrishnan, a postdoc in EECS and CSAIL; EECS professors Fredo Durand and John Guttag; and senior author Adrian Dalca, who is also a faculty member in radiology at Harvard Medical School.<\/p>\n<p><strong>The \u201cMagic\u201d behind the system<\/strong><\/p>\n<p>Although now applied to medical imaging, the system actually started as a means to synthesize training data for a smartphone app that could identify and retrieve information about cards from the popular collectable card game, \u201cMagic: The Gathering.\u201d Released in the early 1990s, \u201cMagic\u201d has more than 20,000 unique cards \u2014 with more released every few months \u2014\u00a0that players can use to build custom playing decks.<\/p>\n<p>Zhao, an avid \u201cMagic\u201d player, wanted to develop a CNN-powered app that took a photo of any card with a smartphone camera and automatically pulled information such as price and rating from online card databases. \u201cWhen I was picking out cards from a game store, I got tired of entering all their names into my phone and looking up ratings and combos,\u201d Zhao says. \u201cWouldn\u2019t it be awesome if I could scan them with my phone and pull up that information?\u201d<\/p>\n<p>But she realized that\u2019s a very tough computer-vision training task. \u201cYou\u2019d need many photos of all 20,000 cards, under all different lighting conditions and angles. 
No one is going to collect that dataset,\u201d Zhao says.<\/p>\n<p>Instead, Zhao trained a CNN on a smaller dataset of around 200 cards, with 10 distinct photos of each card,\u00a0to learn how to warp a card into various positions. It computed different lighting, angles, and reflections \u2014\u00a0for when cards are placed in plastic sleeves \u2014\u00a0to synthesize realistic warped versions of any card in the dataset. It was an exciting passion project, Zhao says: \u201cBut we realized this approach was really well-suited for medical images, because this type of warping fits really well with MRIs.\u201d<\/p>\n<p><strong>Mind warp<\/strong><\/p>\n<p>Magnetic resonance images (MRIs) are composed of three-dimensional pixels, called voxels. When segmenting MRIs, experts separate and label voxel regions based on the anatomical structure containing them. The diversity of scans, caused by variations in individual brains and equipment used, poses a challenge to using machine learning to automate this process.<\/p>\n<p>Some existing methods can synthesize training examples from labeled scans using \u201cdata augmentation,\u201d which warps labeled voxels into different positions. But these methods require experts to hand-write various augmentation guidelines, and some synthesized scans look nothing like a realistic human brain, which may be detrimental to the learning process.<\/p>\n<p>Instead, the researchers\u2019 system automatically learns how to synthesize realistic scans. The researchers trained their system on 100 unlabeled scans from real patients to compute spatial transformations \u2014\u00a0anatomical correspondences from scan to scan. This generated an equal number of \u201cflow fields,\u201d which model how voxels move from one scan to another. 
Simultaneously, it computed intensity transformations, which capture appearance variations caused by image contrast, noise, and other factors.<\/p>\n<p>In generating a new scan, the system applies a random flow field to the original labeled scan, which shifts around voxels until it structurally matches a real, unlabeled scan. Then, it overlays a random intensity transformation. Finally, the system maps the labels to the new structures by following how the voxels moved in the flow field. In the end, the synthesized scans closely resemble the real, unlabeled scans \u2014 but with accurate labels.<\/p>\n<p>To test their automated segmentation accuracy, the researchers used Dice scores, which measure how well one 3-D shape fits over another, on a scale of 0 to 1. They compared their system to traditional segmentation methods \u2014\u00a0manual and automated \u2014 on 30 different brain structures across 100 held-out test scans. Large structures were comparably accurate among all the methods. But the researchers\u2019 system outperformed all other approaches on smaller structures, such as the hippocampus, which occupies only about 0.6 percent of a brain by volume.<\/p>\n<p>\u201cThat shows that our method improves over other methods, especially as you get into the smaller structures, which can be very important in understanding disease,\u201d Zhao says. 
\u201cAnd we did that while only needing a single hand-labeled scan.\u201d<\/p>\n<p>In a nod to the work\u2019s \u201cMagic\u201d roots, the code is publicly <a href=\"http:\/\/github.com\/xamyzhao\/brainstorm\">available on Github<\/a> under the name of one of the game\u2019s cards, \u201cBrainstorm.\u201d<\/p>\n<\/div>\n<p><a href=\"http:\/\/news.mit.edu\/2019\/training-artificial-intelligence-brain-scan-0619\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Rob Matheson | MIT News Office MIT researchers have devised a novel method to glean more information from images used to train machine-learning models, [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2019\/06\/19\/from-one-brain-scan-more-information-for-medical-artificial-intelligence\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":468,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2279"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=2279"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2279\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.ai
problog.com\/index.php\/wp-json\/wp\/v2\/media\/471"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=2279"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=2279"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=2279"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}