{"id":4609,"date":"2021-04-29T00:01:00","date_gmt":"2021-04-29T00:01:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2021\/04\/29\/when-artists-and-machine-intelligence-come-together\/"},"modified":"2021-04-29T00:01:00","modified_gmt":"2021-04-29T00:01:00","slug":"when-artists-and-machine-intelligence-come-together","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2021\/04\/29\/when-artists-and-machine-intelligence-come-together\/","title":{"rendered":"When artists and machine intelligence come together"},"content":{"rendered":"<p>Author: <\/p>\n<div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p>Throughout history, from photography to video to hypertext, artists have pushed the expressive limits of new technologies, and artificial intelligence is no exception. At I\/O 2019, Google Research and Google Arts &amp; Culture launched the <a href=\"https:\/\/experiments.withgoogle.com\/ami-grants\">Artists + Machine Intelligence Grants<\/a>, providing a range of support and technical mentorship to six artists from around the globe following an open call for proposals. The inaugural grant program sought to expand the field of artists working with Machine Learning (ML) and, through supporting pioneering artists, creatively push at the boundaries of generative ML and natural language processing.\u00a0<\/p>\n<p><\/p>\n<p>Today, we are publishing the outcomes of the grants. The projects draw from many disciplines, including rap and hip hop, screenwriting, early cinema, phonetics, Spanish language poetry, and Indian pre-modern sound. 
What they all have in common is an ability to challenge our assumptions about AI\u2019s creative potential.<\/p>\n<p><\/p>\n<\/div>\n<\/div>\n<div class=\"block-paragraph_with_image\">\n<div class=\"article-module h-c-page\">\n<div class=\"h-c-grid uni-paragraph-wrap\">\n<div class=\"uni-paragraph h-c-grid__col h-c-grid__col--8 h-c-grid__col-m--6 h-c-grid__col-l--6 h-c-grid__col--offset-2 h-c-grid__col-m--offset-3 h-c-grid__col-l--offset-3\" data-component=\"uni-article-paragraph\">\n<figure class=\"article-image--wrap-medium \"><img decoding=\"async\" alt=\"a graffiti-style visualization of the artwork\" src=\"https:\/\/storage.googleapis.com\/gweb-uniblog-publish-prod\/images\/Screenshot_2021-04-21_at_17.09.56.max-1000x1000.png\"><figcaption class=\"article-image__caption \">\n<div class=\"rich-text\">\n<p>Learn more about the <a href=\"https:\/\/hip-hop-poetry-bot.netlify.app\/\">Hip Hop Poetry Bot<\/a><\/p>\n<\/div>\n<\/figcaption><\/figure>\n<div class=\"rich-text\">\n<p><b>Hip Hop Poetry Bot by Alex Fefegha\u00a0\u00a0<\/b><\/p>\n<p>Can AI rap? Alex explores speech generation trained on rap and hip hop lyrics by Black artists. 
For the moment it exists as a proof of concept, as building the experiment in full requires a large, public dataset of rap and hip hop lyrics on which an algorithm can be trained, and such a public archive doesn\u2019t currently exist.\u00a0 The project is therefore launching with an invitation from Alex to rap and hip hop artists to become creative collaborators and contribute their lyrics to create a new, public dataset of lyrics by Black artists.\u00a0<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph_with_image\">\n<div class=\"article-module h-c-page\">\n<div class=\"h-c-grid uni-paragraph-wrap\">\n<div class=\"uni-paragraph h-c-grid__col h-c-grid__col--8 h-c-grid__col-m--6 h-c-grid__col-l--6 h-c-grid__col--offset-2 h-c-grid__col-m--offset-3 h-c-grid__col-l--offset-3\" data-component=\"uni-article-paragraph\">\n<figure class=\"article-image--wrap-medium \"><img decoding=\"async\" alt=\"A woman, partly smiling, in an industrial-style room\" src=\"https:\/\/storage.googleapis.com\/gweb-uniblog-publish-prod\/images\/Martine_Syms_-_Neural_Swamp.max-1000x1000.jpg\"><figcaption class=\"article-image__caption \">\n<div class=\"rich-text\">\n<p>Read more about <a href=\"https:\/\/artsandculture.google.com\/story\/_QWxHHXcmCBDxw\">Neural Swamp<\/a> <\/p>\n<\/div>\n<\/figcaption><\/figure>\n<div class=\"rich-text\">\n<p><b>Neural Swamp by Martine Syms\u00a0<\/b><\/p>\n<p>Martine uses video and performance to examine representations of blackness across generations, geographies, mediums, and traditions. For this residency, Martine developed Neural Swamp, a play staged across five screens, starring five entities who talk and sing alongside and over each other. Two of the five voices are trained on Martine\u2019s voice and generated using machine learning speech models. 
The project will premiere at The Philadelphia Museum of Art and Fondazione Sandretto Re Rebaudengo in Fall 2021.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph_with_image\">\n<div class=\"article-module h-c-page\">\n<div class=\"h-c-grid uni-paragraph-wrap\">\n<div class=\"uni-paragraph h-c-grid__col h-c-grid__col--8 h-c-grid__col-m--6 h-c-grid__col-l--6 h-c-grid__col--offset-2 h-c-grid__col-m--offset-3 h-c-grid__col-l--offset-3\" data-component=\"uni-article-paragraph\">\n<figure class=\"article-image--wrap-medium \"><img decoding=\"async\" alt=\"A dashboard with toggles for changing the letters in a sentence\" src=\"https:\/\/storage.googleapis.com\/gweb-uniblog-publish-prod\/images\/tuner.max-1000x1000.png\"><figcaption class=\"article-image__caption \">\n<div class=\"rich-text\">\n<p>Play with <a href=\"https:\/\/cilex-nonsense-lab.uc.r.appspot.com\/nonsense-laboratory\/#\">The Nonsense Laboratory<\/a><\/p>\n<\/div>\n<\/figcaption><\/figure>\n<div class=\"rich-text\">\n<p><b>The Nonsense Laboratory by Allison Parrish\u00a0\u00a0<\/b><\/p>\n<p>Allison invites you to adjust, poke at, mangle, curate and compress words with a series of playful tools in her Nonsense Laboratory. 
Powered by a bespoke code library and machine learning model developed by Allison Parrish, you can mix and respell words, sequence mouth movements to create new words, rewrite a text so that the words feel different in your mouth, or go on a journey through a field of nonsense.\u00a0<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph_with_image\">\n<div class=\"article-module h-c-page\">\n<div class=\"h-c-grid uni-paragraph-wrap\">\n<div class=\"uni-paragraph h-c-grid__col h-c-grid__col--8 h-c-grid__col-m--6 h-c-grid__col-l--6 h-c-grid__col--offset-2 h-c-grid__col-m--offset-3 h-c-grid__col-l--offset-3\" data-component=\"uni-article-paragraph\">\n<figure class=\"article-image--wrap-medium \"><img decoding=\"async\" alt=\"A collage of images, in the style of old cinema film\" src=\"https:\/\/storage.googleapis.com\/gweb-uniblog-publish-prod\/images\/BACKGROUND.max-1000x1000.jpg\"><figcaption class=\"article-image__caption \">\n<div class=\"rich-text\">\n<p>Explore <a href=\"https:\/\/qa-dot-cilex-dream-again.uc.r.appspot.com\/let-me-dream-again\/\">Let Me Dream Again<\/a><\/p>\n<\/div>\n<\/figcaption><\/figure>\n<div class=\"rich-text\">\n<p><b>Let Me Dream Again by Anna Ridler\u00a0<\/b><\/p>\n<p>Anna uses machine learning to try to recreate lost films from fragments of early Hollywood and European cinema that still exist. The outcome? An endlessly evolving, algorithmically generated film and soundtrack. 
The film will continually play, never repeating itself, over a period of one month.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph_with_image\">\n<div class=\"article-module h-c-page\">\n<div class=\"h-c-grid uni-paragraph-wrap\">\n<div class=\"uni-paragraph h-c-grid__col h-c-grid__col--8 h-c-grid__col-m--6 h-c-grid__col-l--6 h-c-grid__col--offset-2 h-c-grid__col-m--offset-3 h-c-grid__col-l--offset-3\" data-component=\"uni-article-paragraph\">\n<figure class=\"article-image--wrap-medium \"><img decoding=\"async\" alt=\"A woman in a desert holding a staff\" src=\"https:\/\/storage.googleapis.com\/gweb-uniblog-publish-prod\/images\/Paola_Torres_Nunez_del_Prado_-_Dama_Del_Su.max-1000x1000.jpg\"><figcaption class=\"article-image__caption \">\n<div class=\"rich-text\">\n<p>Read more about <a href=\"https:\/\/artsandculture.google.com\/story\/DwWhq_x7q_L2ZQ\">Knots of Code<\/a><\/p>\n<\/div>\n<\/figcaption><\/figure>\n<div class=\"rich-text\">\n<p><b>Knots of Code by Paola Torres N\u00fa\u00f1ez del Prado<\/b><\/p>\n<p>Paola studies the history of quipus, a pre-Columbian notation system that is based on the tying of knots in ropes, as part of a new research project, Knots of Code. The project\u2019s first work is a Spanish-language poetry album from Paola and AIELSON, an artificial intelligence system that composes and recites poetry inspired by quipus and emulating the voice of the late Peruvian poet J.E. 
Eielson.\u00a0<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph_with_image\">\n<div class=\"article-module h-c-page\">\n<div class=\"h-c-grid uni-paragraph-wrap\">\n<div class=\"uni-paragraph h-c-grid__col h-c-grid__col--8 h-c-grid__col-m--6 h-c-grid__col-l--6 h-c-grid__col--offset-2 h-c-grid__col-m--offset-3 h-c-grid__col-l--offset-3\" data-component=\"uni-article-paragraph\">\n<figure class=\"article-image--wrap-medium \"><img decoding=\"async\" alt=\"An empty stage with bells hanging on wires\" src=\"https:\/\/storage.googleapis.com\/gweb-uniblog-publish-prod\/images\/Budhaditya_Chattopadhyay_-_Dhvani.max-1000x1000.jpg\"><figcaption class=\"article-image__caption \">\n<div class=\"rich-text\">\n<p>Read more about <a href=\"https:\/\/artsandculture.google.com\/story\/hQXheyiDWD8ibw\">Dhv\u0101ni<\/a> <\/p>\n<\/div>\n<\/figcaption><\/figure>\n<div class=\"rich-text\">\n<p><b>Dhv\u0101ni by Budhaditya Chattopadhyay\u00a0<\/b><\/p>\n<p>Budhaditya brings a lifelong interest in the materiality, phenomenology, political-cultural associations, and the sociability of sound to Dhv\u0101ni, a responsive sound installation, comprising 51 temple bells and conducted with the help of machine learning. 
An early iteration of Dhv\u0101ni was installed at <a href=\"https:\/\/www.experimenta.fr\/dhvani\/\">EXPERIMENTA Arts &amp; Sciences Biennale 2020<\/a> in Grenoble, France.\u00a0\u00a0<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p>Explore the artworks at g.co\/artistsmeetai or on the free Google Arts &amp; Culture app for <a href=\"https:\/\/itunes.apple.com\/app\/google-arts-culture\/id1050970557\">iOS<\/a> and <a href=\"https:\/\/play.google.com\/store\/apps\/details?id=com.google.android.apps.cultural&amp;referrer=utm_source%3DRP%26utm_medium%3Darticle%26utm_campaign%3DGEN\">Android<\/a>.<\/p>\n<p><\/p>\n<\/div>\n<\/div>\n<\/div>\n<p><a href=\"https:\/\/blog.google\/outreach-initiatives\/arts-culture\/when-artists-and-machine-intelligence-come-together\/\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Throughout history, from photography to video to hypertext, artists have pushed the expressive limits of new technologies, and artificial intelligence is no exception. 
At [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2021\/04\/29\/when-artists-and-machine-intelligence-come-together\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":4610,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/4609"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=4609"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/4609\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/4610"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=4609"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=4609"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=4609"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}