{"id":7068,"date":"2024-01-17T17:50:00","date_gmt":"2024-01-17T17:50:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2024\/01\/17\/stratospheric-safety-standards-how-aviation-could-steer-regulation-of-ai-in-health\/"},"modified":"2024-01-17T17:50:00","modified_gmt":"2024-01-17T17:50:00","slug":"stratospheric-safety-standards-how-aviation-could-steer-regulation-of-ai-in-health","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2024\/01\/17\/stratospheric-safety-standards-how-aviation-could-steer-regulation-of-ai-in-health\/","title":{"rendered":"Stratospheric safety standards: How aviation could steer regulation of AI in health"},"content":{"rendered":"<p>Author: Alex Ouyang | Abdul Latif Jameel Clinic for Machine Learning in Health<\/p>\n<div>\n<p>What is the likelihood of dying in a plane crash? According to a 2022 report released by the International Air Transport Association, the industry fatality risk is 0.11. In other words, on average, a person would need to take a flight every day for 25,214 years to have a 100 percent chance of experiencing a fatal accident. Long touted as one of the safest modes of transportation, the highly regulated aviation industry has MIT scientists thinking that it may hold the key to regulating artificial intelligence in health care.\u00a0<\/p>\n<p>Marzyeh Ghassemi, an assistant professor at the MIT Department of Electrical Engineering and Computer Science (EECS) and Institute of Medical Engineering Sciences, and Julie Shah, an H.N. Slater Professor of Aeronautics and Astronautics at MIT, share an interest in the challenges of transparency in AI models. 
After chatting in early 2023, they realized that aviation could serve as a model to ensure that marginalized patients are not harmed by biased AI models.\u00a0\u00a0<\/p>\n<p>Ghassemi, who is also a principal investigator at the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic) and the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Shah then recruited a cross-disciplinary team of researchers, attorneys, and policy analysts across MIT, Stanford University, the Federation of American Scientists, Emory University, University of Adelaide, Microsoft, and the University of California San Francisco to kick off a research project, <a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3617694.3623224\" target=\"_blank\" rel=\"noopener\">the results of which<\/a> were recently accepted to the Equity and Access in Algorithms, Mechanisms and Optimization Conference.\u00a0<\/p>\n<p>\u201cI think many of our coauthors are excited about AI\u2019s potential for positive societal impacts, especially with recent advancements,\u201d says first author Elizabeth Bondi-Kelly, now an assistant professor of EECS at the University of Michigan who was a postdoc in Ghassemi\u2019s lab when the project began. \u201cBut we\u2019re also cautious and hope to develop frameworks to manage potential risks as deployments start to happen, so we were seeking inspiration for such frameworks.\u201d\u00a0<\/p>\n<p>AI in health today bears a resemblance to where the aviation industry was a century ago, says co-author Lindsay Sanneman, a PhD student in the Department of Aeronautics and Astronautics at MIT. 
Though the 1920s were known as \u201cthe Golden Age of Aviation,\u201d <a href=\"https:\/\/www.mackinac.org\/V2003-30\" target=\"_blank\" rel=\"noopener\">fatal accidents were \u201cdisturbingly numerous,\u201d<\/a> according to the Mackinac Center for Public Policy.\u00a0\u00a0<\/p>\n<p>Jeff Marcus, the current chief of the National Transportation Safety Board (NTSB) Safety Recommendations Division, recently published <a href=\"https:\/\/safetycompass.wordpress.com\/2023\/11\/27\/how-tragedy-led-to-trust-national-aviation-history-month\/\">a National Aviation Month blog post<\/a> noting that while a number of fatal accidents occurred in the 1920s, 1929 remains the \u201cworst year on record\u201d for the most fatal aviation accidents in history, with 51 reported accidents. By today\u2019s standards that would be 7,000 accidents per year, or 20 per day. In response to the high number of fatal accidents in the 1920s, President Calvin Coolidge signed landmark legislation in 1926 known as the Air Commerce Act, which would regulate air travel via the Department of Commerce.\u00a0<\/p>\n<p>But the parallels do not stop there \u2014 aviation\u2019s subsequent path into automation is similar to AI\u2019s. AI explainability has been a contentious topic given AI\u2019s notorious \u201cblack box\u201d problem: researchers continue to debate how much an AI model must \u201cexplain\u201d its result to the user before the explanation itself biases them into blindly following the model\u2019s guidance. \u00a0<\/p>\n<p>\u201cIn the 1970s there was an increasing amount of automation &#8230; autopilot systems that take care of warning pilots about risks,\u201d Sanneman adds. 
\u201cThere were some growing pains as automation entered the aviation space in terms of human interaction with the autonomous system \u2014 potential confusion that arises when the pilot doesn&#8217;t have keen awareness about what the automation is doing.\u201d\u00a0<\/p>\n<p>Today, becoming a commercial airline captain requires 1,500 hours of logged flight time along with instrument training. According to the researchers&#8217; <a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3617694.3623224\" target=\"_blank\" rel=\"noopener\">paper<\/a>, this rigorous and comprehensive process takes approximately 15 years, including a bachelor\u2019s degree and co-piloting. The researchers believe the success of this extensive pilot training could be a model for training medical doctors to use AI tools in clinical settings.\u00a0<\/p>\n<p>The paper also proposes encouraging reports of unsafe health AI tools in the way the Federal Aviation Administration (FAA) does for pilots \u2014 via \u201climited immunity,\u201d which allows pilots to retain their license after doing something unsafe, as long as it was unintentional.\u00a0<\/p>\n<p>According to a <a href=\"https:\/\/iris.who.int\/bitstream\/handle\/10665\/343477\/9789240032705-eng.pdf?sequence=1\">2023 report<\/a> published by the World Health Organization, on average, one in every 10 patients is harmed by an adverse event (i.e., a \u201cmedical error\u201d) while receiving hospital care in high-income countries.\u00a0<\/p>\n<p>Yet in current health care practice, clinicians and health care workers often fear reporting medical errors, not only because of concerns related to guilt and self-criticism, but also because of consequences that emphasize punishing individuals, such as revoking a medical license, rather than reforming the system that made the error more likely to occur.\u00a0\u00a0<\/p>\n<p>\u201cIn health, when the hammer misses, patients suffer,\u201d wrote Ghassemi in a recent <a 
href=\"https:\/\/www.nature.com\/articles\/s41562-023-01721-7\" target=\"_blank\" rel=\"noopener\">comment published in <em>Nature Human Behavior<\/em><\/a>. \u201cThis reality presents an unacceptable ethical risk for medical AI communities who are already grappling with complex care issues, staffing shortages, and overburdened systems.\u201d\u00a0<\/p>\n<p>Grace Wickerson, co-author and health equity policy manager at the Federation of American Scientists, sees this new paper as a critical addition to a broader governance framework that is not yet in place. \u201cI think there&#8217;s a lot that we can do with existing government authority,\u201d they say. \u201cThere&#8217;s different ways that Medicare and Medicaid can pay for health AI that makes sure that equity is considered in their purchasing or reimbursement technologies, the NIH [National Institute of Health] can fund more research in making algorithms more equitable and build standards for these algorithms that could then be used by the FDA [Food and Drug Administration] as they&#8217;re trying to figure out what health equity means and how they&#8217;re regulated within their current authorities.\u201d\u00a0<\/p>\n<p>Among others, the paper lists six primary existing government agencies that could help regulate health AI, including: the FDA, the Federal Trade Commission (FTC), the recently established Advanced Research Projects Agency for Health, the Agency for Healthcare Research and Quality, the Centers for Medicare and Medicaid, the Department of Health and Human Services, and the Office of Civil Rights (OCR).\u00a0\u00a0<\/p>\n<p>But Wickerson says that more needs to be done. 
The most challenging part of writing the paper, in Wickerson\u2019s view, was \u201cimagining what we don\u2019t have yet.\u201d\u00a0\u00a0<\/p>\n<p>Rather than relying solely on existing regulatory bodies, the paper also proposes creating an independent auditing authority, similar to the NTSB, that would conduct safety audits of malfunctioning health AI systems.\u00a0<\/p>\n<p>\u201cI think that&#8217;s the current question for tech governance \u2014 we haven&#8217;t really had an entity that&#8217;s been assessing the impact of technology since the &#8217;90s,\u201d Wickerson adds. \u201cThere used to be an Office of Technology Assessment &#8230; before the digital era even started, this office existed and then the federal government allowed it to sunset.\u201d\u00a0<\/p>\n<p>Zach Harned, co-author and recent graduate of Stanford Law School, believes a primary challenge in emerging technology is that technological development outpaces regulation. \u201cHowever, the importance of AI technology and the potential benefits and risks it poses, especially in the health-care arena, has led to a flurry of regulatory efforts,\u201d Harned says. 
\u201cThe FDA is clearly the primary player here, and they\u2019ve consistently issued guidances and white papers attempting to illustrate their evolving position on AI; however, privacy will be another important area to watch, with enforcement from OCR on the HIPAA [Health Insurance Portability and Accountability Act] side and the FTC enforcing privacy violations for non-HIPAA covered entities.\u201d\u00a0<\/p>\n<p>Harned notes that the area is evolving fast, including developments such as the recent White House <a href=\"https:\/\/www.whitehouse.gov\/briefing-room\/presidential-actions\/2023\/10\/30\/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence\/\">Executive Order 14110<\/a> on the safe and trustworthy development of AI, as well as regulatory activity in the European Union (EU), including the capstone EU AI Act that is nearing finalization. \u201cIt\u2019s certainly an exciting time to see this important technology get developed and regulated to ensure safety while also not stifling innovation,\u201d he says.\u00a0<\/p>\n<p>In addition to regulatory activities, the paper suggests other opportunities to create incentives for safer health AI tools, such as a pay-for-performance program in which insurance companies reward hospitals for good performance (though the researchers recognize that this approach would require additional oversight to be equitable).\u00a0\u00a0<\/p>\n<p>So just how long do researchers think it would take to create a working regulatory system for health AI? According to the paper, \u201cthe NTSB and FAA system, where investigations and enforcement are in two different bodies, was created by Congress over decades.\u201d\u00a0<\/p>\n<p>Bondi-Kelly hopes that the paper is a piece of the puzzle of AI regulation. 
In her mind, \u201cthe dream scenario would be that all of us read the paper and are inspired to apply some of the helpful lessons from aviation to help AI to prevent some of the potential AI harms during deployment.\u201d<\/p>\n<p>In addition to Ghassemi, Shah, Bondi-Kelly, and Sanneman, MIT co-authors on the work include Senior Research Scientist Leo Anthony Celi and former postdocs\u00a0Thomas Hartvigsen and Swami Sankaranarayanan. Funding for the work came, in part, from an MIT CSAIL METEOR Fellowship, Quanta Computing, the Volkswagen Foundation, the National Institutes of Health, the Herman L. F. von Helmholtz Career Development Professorship and a CIFAR Azrieli Global Scholar award.<\/p>\n<\/div>\n<p><a href=\"https:\/\/news.mit.edu\/2024\/stratospheric-safety-standards-how-aviation-could-steer-ai-health-regulation-0117\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Alex Ouyang | Abdul Latif Jameel Clinic for Machine Learning in Health What is the likelihood of dying in a plane crash? 
According to [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2024\/01\/17\/stratospheric-safety-standards-how-aviation-could-steer-regulation-of-ai-in-health\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":474,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/7068"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=7068"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/7068\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/460"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=7068"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=7068"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=7068"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}