{"id":2126,"date":"2019-05-10T16:00:00","date_gmt":"2019-05-10T16:00:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2019\/05\/10\/dancing-with-a-machine-bill-t-jones-on-ai-and-art\/"},"modified":"2019-05-10T16:00:00","modified_gmt":"2019-05-10T16:00:00","slug":"dancing-with-a-machine-bill-t-jones-on-ai-and-art","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2019\/05\/10\/dancing-with-a-machine-bill-t-jones-on-ai-and-art\/","title":{"rendered":"&#8220;Dancing with a machine:&#8221; Bill T. Jones on AI and art"},"content":{"rendered":"<div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p>In early 2019, the Google Creative Lab partnered with Bill T. Jones, a pioneering choreographer, two-time Tony Award Winner, MacArthur Fellow, National Medal of the Arts Honoree, and artistic director and co-founder of the Bill T. Jones\/Arnie Zane Company of <a href=\"https:\/\/newyorklivearts.org\/\">New York Live Arts<\/a>. We teamed up to explore the creative possibilities of speech recognition and <a href=\"https:\/\/medium.com\/tensorflow\/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5\">PoseNet<\/a>, Google\u2019s machine-learning model that estimates human poses in real time in the browser.<\/p>\n<p>We sat down with Bill to hear his reflections on working at the intersection of art, technology, identity and the body. Try out the experiments and watch a short film about the collaboration at <a href=\"https:\/\/experiments.withgoogle.com\/admin\/experiment\/5699258518863872\/preview\">g.co\/billtjonesai<\/a>.<\/p>\n<p><b>Why did you collaborate with Google on AI experiments?<\/b><\/p>\n<p>The idea of machine learning intrigues me. The theme of our company\u2019s <a href=\"https:\/\/newyorklivearts.org\/\">Live Ideas Fest<\/a> this year is artificial intelligence. 
AI is supposed to take us into the next century and important things are supposed to be happening with this technology, so I wanted to see if we could use it to stir real human emotion. Maybe it\u2019s ego, but I want to be the one to know how to use PoseNet to make somebody cry. How do you get the technology to be weighted with meaning and import?<\/p>\n<p><b>How have you experimented with technology over the course of your career?<\/b><\/p>\n<p>Back in the \u201880s, Arnie Zane [Jones\u2019s partner and company co-founder] and I decided we didn\u2019t want to work with technology anymore because the pure art of sweat and bodies on stage should be enough. Technology just steals your thunder. Then a friend said, \u201cTechnology can suggest the beyond. Technology can project what is at stake when you die. When you see these figures, they\u2019re no longer human, they\u2019re something else.\u201d So we started working with more state-of-the-art technologies. Later, I did a project called \u201cGhostcatching\u201d with 3D motion capture. At that time, the team was saying, \u201cwe want to capture your movement so that in 50 years we could reconstitute your performance.\u201d That\u2019s how people were thinking years ago, and it seems to still be a preoccupation now. They said they wanted to \u201cdecouple me from my personality.\u201d Maybe I\u2019m romantic, but I don&#8217;t think that\u2019s possible. So, my focus with this project was not on how to replace the performer, but on how to complement them.<\/p>\n<p><b>What was it like experimenting with AI?<\/b><\/p>\n<p>I\u2019ve never collaborated with a machine before. It&#8217;s a whole other learning curve. We are taught in the art world that you don\u2019t get many chances. This experience contradicted that notion. 
It was refreshing to co-create with the Google team, whose approach was playful and iterative.<\/p>\n<p><b>Were there moments you felt this technology was in the service of dance?<\/b><\/p>\n<p>In the <i>service<\/i> of dance? I say this with great respect: it&#8217;s almost antithetical to everything I thought dance was. The webcam\u2019s field of vision determines a lot about how we move. Dance for us is oftentimes in an empty room that implies infinite space. But working with a webcam, there is a very prescribed space. Limitations are not bad in art making, but they were a new challenge. It was a shift creating something for the screen and not the stage.<\/p>\n<p><b>What was it like shifting from creating for the stage to the screen?<\/b><\/p>\n<p>I felt like I was being asked: Come out of the place that you as an artist come from, the avant-garde. Come and work with a medium that&#8217;s available to millions of people. That&#8217;s wonderful, but it&#8217;s also a responsibility. The meaningful things people make with this are going to be very weird in a way, aren&#8217;t they? Very kind of exciting. I&#8217;m appreciative of being part of the development of this.<\/p>\n<p><b>Where do you see AI going? Will you work with it more in the future?<\/b><\/p>\n<p>I understand context is the next frontier in machine learning. This seems paramount for art making. I hope one day soon they make a machine I can dance with. I\u2019d like to dance with a machine, just to see what that\u2019s like.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<p><a href=\"https:\/\/www.blog.google\/technology\/ai\/bill-t-jones-dance-art\/\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In early 2019, the Google Creative Lab partnered with Bill T. 
Jones, a pioneering choreographer, two-time Tony Award Winner, MacArthur Fellow, National Medal of [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2019\/05\/10\/dancing-with-a-machine-bill-t-jones-on-ai-and-art\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":462,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2126"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=2126"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2126\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/475"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=2126"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=2126"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=2126"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}