{"id":6348,"date":"2023-03-10T14:00:00","date_gmt":"2023-03-10T14:00:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2023\/03\/10\/mit-professor-to-congress-we-are-at-an-inflection-point-with-ai\/"},"modified":"2023-03-10T14:00:00","modified_gmt":"2023-03-10T14:00:00","slug":"mit-professor-to-congress-we-are-at-an-inflection-point-with-ai","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2023\/03\/10\/mit-professor-to-congress-we-are-at-an-inflection-point-with-ai\/","title":{"rendered":"MIT professor to Congress: \u201cWe are at an inflection point\u201d with AI"},"content":{"rendered":"<p>Author: MIT Washington Office<\/p>\n<div>\n<p>Government should not \u201cabdicate\u201d its responsibilities and leave the future path of artificial intelligence solely to Big Tech, Aleksander M\u0105dry, the Cadence Design Systems Professor of Computing at MIT and director of the MIT Center for Deployable Machine Learning, told a Congressional panel on Wednesday.\u00a0<\/p>\n<p>Rather, M\u0105dry said, government should be asking questions about the purpose and explainability of the algorithms corporations are using, as a precursor to regulation, which he described as \u201can important tool\u201d in ensuring that AI is consistent with society\u2019s goals. If the government doesn\u2019t start asking questions, then \u201cI am extremely worried\u201d about the future of AI, M\u0105dry said in response to a question from Rep. Gerald Connolly.<\/p>\n<p>M\u0105dry, a leading expert on explainability and AI, was testifying at a hearing titled \u201cAdvances in AI: Are We Ready for a Tech Revolution?\u201d before the House Subcommittee on Cybersecurity, Information Technology, and Government Innovation, a panel of the House Committee on Oversight and Accountability. 
The other witnesses at the hearing were former Google CEO Eric Schmidt, IBM Vice President Scott Crowder, and Center for AI and Digital Policy Senior Research Director Merve Hickok.<\/p>\n<p>In her opening remarks, Subcommittee Chair Rep. Nancy Mace cited the book \u201cThe Age of AI: And Our Human Future\u201d by Schmidt, Henry Kissinger, and Dan Huttenlocher, the dean of the MIT Schwarzman College of Computing. She also called attention to a March 3 op-ed in <em>The Wall Street Journal<\/em> by the three authors that summarized the book while discussing ChatGPT. Mace said her formal opening remarks had been entirely written by ChatGPT.<\/p>\n<p>In his prepared remarks, M\u0105dry raised three overarching points. First, he noted that AI is \u201cno longer a matter of science fiction\u201d or confined to research labs. It is out in the world, where it can bring enormous benefits but also poses risks.<\/p>\n<p>Second, he said AI exposes us to \u201cinteractions that go against our intuition.\u201d He said that because AI tools like ChatGPT mimic human communication, people are too likely to unquestioningly believe what such large language models produce. In the worst case, M\u0105dry warned, human analytical skills will atrophy. He also said it would be a mistake to regulate AI as if it were human \u2014 for example, by asking AI to explain its reasoning and assuming that the resulting answers are credible.<\/p>\n<p>Finally, he said too little attention has been paid to problems that will result from the nature of the AI \u201csupply chain\u201d \u2014 the way AI systems are built on top of each other. At the base are general systems like ChatGPT, which can be developed by only a few companies because they are so expensive and complex to build. 
Layered on top of such systems are many AI systems designed to handle a particular task, like figuring out whom a company should hire.\u00a0<\/p>\n<p>M\u0105dry said this layering raised several \u201cpolicy-relevant\u201d concerns. First, a system built this way inherits whatever vulnerabilities or biases are in the large system at its base, and is dependent on the work of a few large companies. Second, the interaction of AI systems is not well-understood from a technical standpoint, making the results of AI even more difficult to predict or explain, and making the tools difficult to \u201caudit.\u201d Finally, the mix of AI tools makes it difficult to know whom to hold responsible when a problem results \u2014 who should be legally liable and who should address the concern.<\/p>\n<p>In the written material submitted to the subcommittee, M\u0105dry concluded, \u201cAI technology is not particularly well-suited for deployment through complex supply chains,\u201d even though that is exactly how it is being deployed.<\/p>\n<p>M\u0105dry ended his testimony by calling on Congress to probe AI issues and to be prepared to act. \u201cWe are at an inflection point in terms of what future AI will bring. Seizing this opportunity means discussing the role of AI, what exactly we want it to do for us, and how to ensure it benefits us all. 
This will be a difficult conversation, but we do need to have it, and have it now,\u201d he told the subcommittee.<\/p>\n<p>The testimony of all the hearing witnesses and a video of the hearing, which lasted about two hours, <a href=\"https:\/\/oversight.house.gov\/hearing\/advances-in-ai-are-we-ready-for-a-tech-revolution\/\" target=\"_blank\" rel=\"noopener\">are available online<\/a>.<\/p>\n<\/div>\n<p><a href=\"https:\/\/news.mit.edu\/2023\/mit-congress-inflection-point-ai-0310\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: MIT Washington Office Government should not \u201cabdicate\u201d its responsibilities and leave the future path of artificial intelligence solely to Big Tech, Aleksander M\u0105dry, the [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2023\/03\/10\/mit-professor-to-congress-we-are-at-an-inflection-point-with-ai\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":474,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/6348"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=6348"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/6348\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/465"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=6348"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=6348"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=6348"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}