{"id":1335,"date":"2018-11-27T19:15:00","date_gmt":"2018-11-27T19:15:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2018\/11\/27\/student-group-explores-the-ethical-dimensions-of-artificial-intelligence\/"},"modified":"2018-11-27T19:15:00","modified_gmt":"2018-11-27T19:15:00","slug":"student-group-explores-the-ethical-dimensions-of-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2018\/11\/27\/student-group-explores-the-ethical-dimensions-of-artificial-intelligence\/","title":{"rendered":"Student group explores the ethical dimensions of artificial intelligence"},"content":{"rendered":"<p>Author: Kim Martineau | MIT Quest for Intelligence<\/p>\n<div>\n<p>For years, the tech industry followed a move-fast-and-break-things approach, and few people seemed to mind as a wave of astonishing new tools for communicating and navigating the world appeared on the market.<\/p>\n<p>Now, amid rising concerns about the spread of fake news, the misuse of personal data, and the potential for machine-learning algorithms to discriminate at scale, people are taking stock of what the industry broke. Into this moment of reckoning come three MIT students,\u00a0<a href=\"http:\/\/irenechen.net\/\">Irene Chen<\/a>,\u00a0<a href=\"http:\/\/people.csail.mit.edu\/lgilpin\/\">Leilani Gilpin<\/a>, and\u00a0<a href=\"http:\/\/harinisuresh.com\/\">Harini Suresh<\/a>, who are the founders of the new\u00a0<a href=\"https:\/\/mitaiethics.github.io\/\">MIT AI Ethics Reading Group<\/a>.<\/p>\n<p>All three are graduate students in the Department of Electrical Engineering and Computer Science (EECS) who\u00a0had done stints in Silicon Valley, where they saw firsthand how technology developed with good intentions could go horribly wrong.<\/p>\n<p>\u201cAI is so cool,\u201d said\u00a0Chen during a chat in Lobby 7 on a recent morning. \u201cIt\u2019s so powerful. But sometimes it scares me.\u201d\u00a0<\/p>\n<p>The founders\u00a0had debated the promise and perils of AI in class and among friends, but their push to reach a wider audience came in September, at a Google-sponsored\u00a0<a href=\"https:\/\/sites.google.com\/view\/mlfairnessworkshop\/\">fairness in machine learning workshop<\/a>\u00a0in Cambridge. There, an MIT professor\u00a0floated the idea of an ethics forum and put the three women in touch.\u00a0<\/p>\n<p>Then when\u00a0MIT announced plans last month to create the\u00a0<a href=\"http:\/\/news.mit.edu\/2018\/mit-reshapes-itself-stephen-schwarzman-college-of-computing-1015\">Stephen A. Schwartzman College of Computing<\/a>, they launched the\u00a0<a href=\"https:\/\/mitaiethics.github.io\/\">MIT AI Ethics Reading Group<\/a>. Amid\u00a0the enthusiasm following the Schwartzman announcement, more than 60 people turned up to their first meeting.\u00a0<\/p>\n<p>One was Sacha Ghebali, a\u00a0master\u2019s student at the\u00a0<a href=\"http:\/\/mitsloan.mit.edu\/\">MIT Sloan School of Management<\/a>. He had taken a required ethics course in his finance program at MIT and was eager to learn more.<\/p>\n<p>\u201cWe\u2019re building tools that have a lot of leverage,\u201d he says. \u201cIf you don\u2019t build them properly, you can do a lot of harm. You need to be constantly thinking about ethics.\u201d<\/p>\n<p>On a recent night, Ghebali was among those returning for a second night of discussion. 
They gathered around a stack of pizza boxes in an empty classroom as Gilpin kicked off the meeting by recapping the fatal crash last spring in which a self-driving Uber struck a pedestrian. Who should be liable, Gilpin asked: the engineer who programmed the car or the person behind the wheel?<\/p>\n<p>A lively debate followed.\u00a0The students then broke into small groups as the conversation shifted to how ethics should be taught: either as a stand-alone course or integrated throughout the curriculum. They considered two models: Harvard, which embeds philosophy and moral reasoning into its computer science classes, and Santa Clara University, in Silicon Valley, which offers a case-study-based module on ethics within its introductory data science courses.\u00a0<\/p>\n<p>Reactions in the room were mixed.<\/p>\n<p>\u201cIt\u2019s hard to teach ethics in a CS class, so maybe there should be separate classes,\u201d one student offered. Others thought ethics should be integrated at each level of technical training.\u00a0<\/p>\n<p>\u201cWhen you learn to code, you learn a design process,\u201d said Natalie Lao, an EECS graduate student helping to develop AI courses for K-12 students. \u201cIf you include ethics in your design practice, you learn to internalize ethical programming as part of your workflow.\u201d<\/p>\n<p>The students also debated whether stakeholders beyond the end user should be considered. \u201cI was never taught, when I\u2019m building something, to talk to all the people it will affect,\u201d Suresh told the group. \u201cThat could be really useful.\u201d<\/p>\n<p>How MIT should teach ethics in the College of Computing era remains unclear, says Hal Abelson, the Class of 1922 Professor of Computer Science and Electrical Engineering who helped start the group and was at both meetings. \u201cThis is really just the beginning,\u201d he says. \u201cFive years ago, we weren\u2019t even talking about people shutting down the steering wheel of your car.\u201d<\/p>\n<p>As AI continues to evolve, questions of safety and fairness will remain foremost concerns. In their research at MIT, the founders of the ethics reading group are simultaneously developing tools to address the dilemmas raised in the group.\u00a0<\/p>\n<p>Gilpin is creating methodologies and tools to help self-driving cars and other autonomous machines explain themselves. For these machines to be truly safe and widely trusted, she says, they need to be able to interpret their actions and learn from their mistakes.\u00a0<\/p>\n<p>Suresh is developing algorithms that make it easier for people to use data responsibly. In a summer internship with Google, she looked at how algorithms trained on Google News and other text-based datasets pick up on certain features to learn biased associations. Identifying sources of bias in the data pipeline, she says, is key to avoiding more serious problems in downstream applications.\u00a0<\/p>\n<p>Chen, formerly a data scientist and chief of staff at Dropbox, develops machine learning tools for health care. In a new paper,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/1805.12002.pdf\">Why Is My Classifier Discriminatory?<\/a>, she argues that the fairness of AI predictions should be measured and corrected by collecting more data, not just by tweaking the model. 
She will present the paper next month at Neural Information Processing Systems, the world\u2019s largest machine-learning conference.<\/p>\n<p>\u201cSo many of the problems at Dropbox, and now in my research at MIT, are completely new,\u201d she says. \u201cThere isn\u2019t a playbook. Part of the fun and challenge of working on AI is that you\u2019re making it up as you go.\u201d<\/p>\n<p>The AI ethics group holds its last two meetings of the semester on Nov. 28 and Dec. 12.<\/p>\n<\/div>\n<p><a href=\"http:\/\/news.mit.edu\/2018\/mit-student-group-explores-artificial-intelligence-ethics-1127\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Kim Martineau | MIT Quest for Intelligence For years, the tech industry followed a move-fast-and-break-things approach, and few people seemed to mind as a [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2018\/11\/27\/student-group-explores-the-ethical-dimensions-of-artificial-intelligence\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":475,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/1335"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=1335"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/1335\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/473"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=1335"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=1335"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=1335"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}