{"id":1434,"date":"2018-12-21T19:00:00","date_gmt":"2018-12-21T19:00:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2018\/12\/21\/how-we-worked-to-make-ai-for-everyone-in-2018\/"},"modified":"2018-12-21T19:00:00","modified_gmt":"2018-12-21T19:00:00","slug":"how-we-worked-to-make-ai-for-everyone-in-2018","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2018\/12\/21\/how-we-worked-to-make-ai-for-everyone-in-2018\/","title":{"rendered":"How we worked to make AI for everyone in 2018"},"content":{"rendered":"<p>Author: <\/p>\n<div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p>Seeing music. Predicting earthquake aftershocks. Finding emojis in real life. These are just a few examples of how researchers, engineers and user-experience (UX) professionals made imaginative ideas real. They made it happen using tools and techniques developed by Google\u2019s People + AI Research (PAIR) team in 2018.<\/p>\n<p>We founded PAIR in 2017 to\u00a0<a href=\"https:\/\/ai.google\/research\/teams\/brain\/pair\">conduct research<\/a>,\u00a0<a href=\"https:\/\/medium.com\/google-design\/human-centered-machine-learning-a770d10562cd\">create design frameworks<\/a>\u00a0and\u00a0<a href=\"https:\/\/github.com\/PAIR-code\">build new technologies<\/a>\u00a0that help make partnerships between humans and artificial intelligence productive, enjoyable and fair. One of our main goals is to create easy-to-use tools to\u00a0<a href=\"https:\/\/pair-code.github.io\/facets\/\">visualize machine learning (ML) datasets<\/a>\u00a0and\u00a0<a href=\"https:\/\/js.tensorflow.org\/\">train ML models<\/a>\u00a0(the mathematical equations that represent the steps a machine will complete to make a decision) in browsers. Put simply, this means anyone with an internet connection can now use ML.<\/p>\n<p>Here\u2019s what PAIR has accomplished over the past year\u2014and here\u2019s how engineers and UX teams can put our resources to use in 2019 and beyond.<\/p>\n<h3>Creating a design library\u2014and learning how to design for AI<\/h3>\n<p>In January, we launched\u00a0<a href=\"https:\/\/design.google\/library\/ai\/\">a library<\/a>\u00a0of user-experience articles and case studies on\u00a0<a href=\"https:\/\/design.google\/\">Google Design<\/a>. These show how Google makes decisions to balance our users\u2019 needs for familiarity and trust with new functionality and experiences enabled by AI. The case studies go behind the scenes to show how Google teams developed user experiences for applications, like the fun mobile game\u00a0<a href=\"https:\/\/design.google\/library\/designing-emoji-scavengerhunt\/\">Emoji Scavenger Hunt<\/a>.<\/p>\n<p>In these articles, practicing user-experience designers offer clear how-tos. They address challenges in designing for AI, such as balancing how to design for habits like swiping or scrolling in certain directions, and building\u00a0<a href=\"https:\/\/design.google\/library\/predictably-smart\/\">personalized experiences<\/a>\u00a0for individual users. 
We know we don't have all the answers, so we also seek advice from outside experts, like Paola Antonelli, Senior Curator of Architecture and Design at New York's Museum of Modern Art (MoMA), who [answered our team's questions](https://design.google/library/ai-designs-latest-material/) about using AI as a design material in its own right.

### Talking about AI across disciplines

A key part of our process is partnering with domain experts in other fields. For example, this year we worked with Harvard's Brendan Meade and the University of Connecticut's Phoebe de Vries on a model for [predicting and visualizing earthquake aftershocks](https://www.blog.google/technology/ai/forecasting-earthquake-aftershock-locations-ai-assisted-science/). The project led to a state-of-the-art model for aftershock prediction, and, intriguingly, our analysis of the model suggested new, unexpected directions for human researchers to investigate.

In March, we hosted our first [UX symposium](https://sites.google.com/corp/view/pair-ux-symposium-march-2018/speakers-talks?authuser=0) in Zurich, featuring external researchers and industry professionals. And in May, we held a panel at I/O, "[AI for Everyone](https://www.youtube.com/watch?time_continue=106&v=_JCImtDa0Jk)," where Google engineering leaders with expertise ranging from cloud computing to climate science discussed fair and inclusive AI in their fields.

We're also dedicated to translating the complicated language behind AI for everyone who uses it, even if they're not engineers. Since June, our first PAIR writer-in-residence, tech journalist [David Weinberger](https://www.linkedin.com/in/david-weinberger-0131/), has been embedded in PAIR's Cambridge, Mass. lab.
He's explaining key AI concepts, like [classification](https://accelerate.withgoogle.com/stories/ai-outside-in-machine-learnings-triangle-of-error) and [confidence levels](https://accelerate.withgoogle.com/stories/ai-outside-in-confidence-everywhere), and timely topics like [fairness in machine learning](https://pair-code.github.io/what-if-tool/ai-fairness.html), for non-technical audiences.

### New open-source tools for engineers, UXers and beyond

![Seeing Music](https://storage.googleapis.com/gweb-uniblog-publish-prod/original_images/Seeing_Music.gif)

*Using TensorFlow.js, an open-source Javascript library created by PAIR, and other software, a group of musicians, designers, engineers and the Google Creative Lab created [Seeing Music](https://experiments.withgoogle.com/seeing-music), which makes it possible to visualize subtle textures in sound.*

We believe in applying deep insights to invent, and open-source, new technologies that can be used by engineers, UX professionals and other stakeholders who may not be experts in ML.

So we started [TensorFlow.js](https://js.tensorflow.org/), a pure Javascript library that extends [TensorFlow](https://www.tensorflow.org/) into the browser.
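To make "extending TensorFlow into the browser" concrete, here is a minimal sketch of the kind of in-browser training TensorFlow.js enables; the tiny model and toy dataset are illustrative assumptions, not an example taken from the post:

```javascript
// Minimal sketch: training a tiny model entirely in the browser with TensorFlow.js.
// Assumes tfjs is loaded, e.g. via <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
// or `import * as tf from '@tensorflow/tfjs';`.
const model = tf.sequential();
model.add(tf.layers.dense({units: 1, inputShape: [1]}));
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});

// Toy data for y = 2x - 1; in a real app this could come from the user's own device.
const xs = tf.tensor2d([-1, 0, 1, 2, 3, 4], [6, 1]);
const ys = tf.tensor2d([-3, -1, 1, 3, 5, 7], [6, 1]);

// Training and prediction both run on the client, so no data leaves the browser.
model.fit(xs, ys, {epochs: 200}).then(() => {
  model.predict(tf.tensor2d([10], [1, 1])).print();
});
```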
Since open-sourcing TensorFlow.js in March, we've seen a variety of applications, including a set of [accessible creative tools](https://experiments.withgoogle.com/collection/creatability) for drawing, making music and more, designed by Google's Creative Lab with collaborators from the accessibility community.

Video: [Creatability: Exploring ways to make creative tools more accessible for everyone](https://youtube.com/watch?v=c5-bHJqtQS0)

Our PAIR team also built the [What-If Tool](https://pair-code.github.io/what-if-tool/), released this fall, so professionals building ML systems don't have to write a single line of code to answer "what if" questions such as: "If I changed these data points, how would my model's predictions change? Does the model perform differently for various groups, for example historically marginalized people?" The tool makes it possible to visualize and inspect alternative scenarios with the click of a button.

Also this year, our team developed and open-sourced [a new technique](https://github.com/tensorflow/tcav/) for helping people more easily understand the inner workings of neural networks in terms of simple, human-understandable concepts, like showing how a model can recognize images of zebras by their stripes.

In 2019, we're excited to expand PAIR's work further with global audiences of engineers and user-experience designers, and with everyday users. For more resources, updates and information on our research, [head to PAIR's website](https://ai.google/research/teams/brain/pair).

[Go to Source](https://www.blog.google/technology/ai/how-we-worked-make-ai-everyone-2018/)