{"id":7752,"date":"2024-11-19T20:30:00","date_gmt":"2024-11-19T20:30:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2024\/11\/19\/a-model-of-virtuosity\/"},"modified":"2024-11-19T20:30:00","modified_gmt":"2024-11-19T20:30:00","slug":"a-model-of-virtuosity","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2024\/11\/19\/a-model-of-virtuosity\/","title":{"rendered":"A model of virtuosity"},"content":{"rendered":"<p>Author: Nicole Estvanik Taylor | Arts at MIT<\/p>\n<div>\n<p dir=\"ltr\" id=\"docs-internal-guid-73b69c58-7fff-15b7-2349-db61db4722ab\">A crowd gathered at the MIT Media Lab in September for a concert by musician Jordan Rudess and two collaborators. One of them, violinist and vocalist Camilla B\u00e4ckman, has performed with Rudess before. The other \u2014 an artificial intelligence model informally dubbed the jam_bot, which Rudess developed with an MIT team over the preceding several months \u2014 was making its public debut as a work in progress.<\/p>\n<p dir=\"ltr\">Throughout the show, Rudess and B\u00e4ckman exchanged the signals and smiles of experienced musicians finding a groove together. Rudess\u2019 interactions with the jam_bot suggested a different and unfamiliar kind of exchange. During one duet inspired by Bach, Rudess alternated between playing a few measures and allowing the AI to continue the music in a similar baroque style. Each time the model took its turn, a range of expressions moved across Rudess\u2019 face: bemusement, concentration, curiosity. At the end of the piece, Rudess admitted to the audience, \u201cThat is a combination of a whole lot of fun and really, really challenging.\u201d<\/p>\n<p dir=\"ltr\">Rudess is an acclaimed keyboardist \u2014\u00a0the best of all time, according to one\u00a0Music Radar magazine poll \u2014 known for his work with the platinum-selling, Grammy-winning progressive metal band Dream Theater, which embarks this fall on a 40th anniversary tour. He is also a solo artist whose latest album, \u201c<a href=\"https:\/\/www.jordanrudess.com\/announcing-my-new-solo-album-permission-to-fly-and-new-single-the-alchemist\/\">Permission to Fly<\/a>,\u201d was released on Sept. 6; an educator who shares his skills through detailed online tutorials; and the founder of software company Wizdom Music. His work combines a rigorous classical foundation (he began his piano studies at The Juilliard School at age 9) with a genius for improvisation and an appetite for experimentation.<\/p>\n<p dir=\"ltr\">Last spring, Rudess became a visiting artist with the MIT Center for Art, Science and Technology (CAST), collaborating with the MIT Media Lab\u2019s Responsive Environments research group on the creation of new AI-powered music technology. Rudess\u2019 main collaborators in the enterprise are Media Lab graduate students Lancelot Blanchard, who researches musical applications of generative AI (informed by his own studies in classical piano), and Perry Naseck, an artist and engineer specializing in interactive, kinetic, light- and time-based media. Overseeing the project is Professor Joseph Paradiso, head of the Responsive Environments group and a longtime Rudess fan. Paradiso arrived at the Media Lab in 1994 with a CV in physics and engineering and a sideline designing and building synthesizers to explore his avant-garde musical tastes. His group has a tradition of investigating musical frontiers through novel user interfaces, sensor networks, and unconventional datasets.<\/p>\n<p dir=\"ltr\">The researchers set out to develop a machine learning model channeling Rudess\u2019 distinctive musical style and technique. In a\u00a0<a href=\"https:\/\/mit-genai.pubpub.org\/pub\/iz684jjr\/release\/1\">paper<\/a> published online by MIT Press in September, co-authored with MIT music technology professor Eran Egozy, they articulate their vision for what they call \u201csymbiotic virtuosity:\u201d for human and computer to duet in real-time, learning from each duet they perform together, and making performance-worthy new music in front of a live audience.<\/p>\n<p dir=\"ltr\">Rudess contributed the data on which Blanchard trained the AI model. Rudess also provided continuous testing and feedback, while Naseck experimented with ways of visualizing the technology for the audience.<\/p>\n<p dir=\"ltr\">\u201cAudiences are used to seeing lighting, graphics, and scenic elements at many concerts, so we needed a platform to allow the AI to build its own relationship with the audience,\u201d Naseck says. In early demos, this took the form of a sculptural installation with illumination that shifted each time the AI changed chords. During the concert on Sept. 21, a grid of petal-shaped panels mounted behind Rudess came to life through choreography based on the activity and future generation of the AI model.<\/p>\n<p dir=\"ltr\">\u201cIf you see jazz musicians make eye contact and nod at each other, that gives anticipation to the audience of what\u2019s going to happen,\u201d says Naseck. \u201cThe AI is effectively generating sheet music and then playing it. How do we show what\u2019s coming next and communicate that?\u201d<\/p>\n<p dir=\"ltr\">Naseck designed and programmed the structure from scratch at the Media Lab with assistance from Brian Mayton (mechanical design) and Carlo Mandolini (fabrication), drawing some of its movements from an experimental machine learning model developed by visiting student Madhav Lavakare that maps music to points moving in space. With the ability to spin and tilt its petals at speeds ranging from subtle to dramatic, the kinetic sculpture distinguished the AI\u2019s contributions during the concert from those of the human performers, while conveying the emotion and energy of its output: swaying gently when Rudess took the lead, for example, or furling and unfurling like a blossom as the AI model generated stately chords for an improvised adagio. The latter was one of Naseck\u2019s favorite moments of the show.<\/p>\n<p dir=\"ltr\">\u201cAt the end, Jordan and Camilla left the stage and allowed the AI to fully explore its own direction,\u201d he recalls. \u201cThe sculpture made this moment very powerful \u2014 it allowed the stage to remain animated and intensified the grandiose nature of the chords the AI played. The audience was clearly captivated by this part, sitting at the edges of their seats.\u201d<\/p>\n<p dir=\"ltr\">\u201cThe goal is to create a musical visual experience,\u201d says Rudess,\u00a0\u201cto show what\u2019s possible and to up the game.\u201d<\/p>\n<p dir=\"ltr\"><strong>Musical futures<\/strong><\/p>\n<p dir=\"ltr\">As the starting point for his model, Blanchard used a music transformer, an open-source neural network architecture developed by MIT Assistant Professor Anna Huang SM \u201908, who joined the MIT faculty in September.<\/p>\n<p dir=\"ltr\">\u201cMusic transformers work in a similar way as large language models,\u201d Blanchard explains. \u201cThe same way that ChatGPT would generate the most probable next word, the model we have would predict the most probable next notes.\u201d<\/p>\n<p dir=\"ltr\">Blanchard fine-tuned the model using Rudess\u2019 own playing of elements from bass lines to chords to melodies, variations of which Rudess recorded in his New York studio. Along the way, Blanchard ensured the AI would be nimble enough to respond in real-time to Rudess\u2019 improvisations.<\/p>\n<p dir=\"ltr\">\u201cWe reframed the project,\u201d says Blanchard, \u201cin terms of musical futures that were hypothesized by the model and that were only being realized at the moment based on what Jordan was deciding.\u201d<\/p>\n<p dir=\"ltr\">As Rudess puts it: \u201cHow can the AI respond \u2014\u00a0how can I have a dialogue with it? That\u2019s the cutting-edge part of what we\u2019re doing.\u201d<\/p>\n<p dir=\"ltr\">Another priority emerged: \u201cIn the field of generative AI and music, you hear about startups like Suno or Udio that are able to generate music based on text prompts. Those are very interesting, but they lack controllability,\u201d says Blanchard. \u201cIt was important for Jordan to be able to anticipate what was going to happen. If he could see the AI was going to make a decision he didn\u2019t want, he could restart the generation or have a kill switch so that he can take control again.\u201d<\/p>\n<p dir=\"ltr\">In addition to giving Rudess a screen previewing the musical decisions of the model, Blanchard built in different modalities the musician could activate as he plays \u2014 prompting the AI to generate chords or lead melodies, for example, or initiating a call-and-response pattern.<\/p>\n<p dir=\"ltr\">\u201cJordan is the mastermind of everything that\u2019s happening,\u201d he says.<\/p>\n<p dir=\"ltr\"><strong>What would Jordan do<\/strong><\/p>\n<p dir=\"ltr\">Though the residency has wrapped up, the collaborators see many paths for continuing the research. For example, Naseck would like to experiment with more ways Rudess could interact directly with his installation, through features like capacitive sensing. \u201cWe hope in the future we\u2019ll be able to work with more of his subtle motions and posture,\u201d Naseck says.<\/p>\n<p dir=\"ltr\">While the MIT collaboration focused on how Rudess can use the tool to augment his own performances, it\u2019s easy to imagine other applications. Paradiso recalls an early encounter with the tech: \u201cI played a chord sequence, and Jordan\u2019s model was generating the leads. It was like having a musical \u2018bee\u2019 of Jordan Rudess buzzing around the melodic foundation I was laying down, doing something like Jordan would do, but subject to the simple progression I was playing,\u201d he recalls, his face echoing the delight he felt at the time. \u201cYou&#8217;re going to see AI plugins for your favorite musician that you can bring into your own compositions, with some knobs that let you control the particulars,\u201d he posits. \u201cIt\u2019s that kind of world we\u2019re opening up with this.\u201d<\/p>\n<p dir=\"ltr\">Rudess is also keen to explore educational uses. Because the samples he recorded to train the model were similar to ear-training exercises he\u2019s used with students, he thinks the model itself could someday be used for teaching. \u201cThis work has legs beyond just entertainment value,\u201d he says.<\/p>\n<p dir=\"ltr\">The foray into artificial intelligence is a natural progression for Rudess\u2019 interest in music technology. \u201cThis\u00a0is the next step,\u201d he believes. When he discusses the work with fellow musicians, however, his enthusiasm for AI often meets with resistance. \u201cI can have sympathy or compassion for a musician who feels threatened, I totally get that,\u201d he allows. \u201cBut my mission is to be one of the people who moves this technology toward positive things.\u201d<\/p>\n<p dir=\"ltr\">\u201cAt the Media Lab, it\u2019s so important to think about how AI and humans come together for the benefit of all,\u201d says Paradiso. \u201cHow is AI going to lift us all up? Ideally it will do what so many technologies have done \u2014 bring us into another vista where we\u2019re more enabled.\u201d<\/p>\n<p dir=\"ltr\">\u201cJordan is ahead of the pack,\u201d Paradiso adds. \u201cOnce it\u2019s established with him, people will follow.\u201d<\/p>\n<p dir=\"ltr\"><strong>Jamming with MIT<\/strong><\/p>\n<p dir=\"ltr\">The Media Lab first landed on Rudess\u2019 radar before his residency because he wanted to try out the Knitted Keyboard created by another member of Responsive Environments, textile researcher Irmandy Wickasono PhD \u201924. From that moment on, \u201cIt&#8217;s been a discovery for me, learning about the cool things that are going on at MIT in the music world,\u201d Rudess says.<\/p>\n<p dir=\"ltr\">During two visits to Cambridge last spring (assisted by his wife, theater and music producer Danielle Rudess), Rudess reviewed final projects in Paradiso\u2019s course on electronic music controllers, the syllabus for which included videos of his own past performances. He brought a new gesture-driven synthesizer called Osmose to a class on interactive music systems taught by Egozy, whose credits include the co-creation of the video game \u201cGuitar Hero.\u201d Rudess also provided tips on improvisation to a composition class; played GeoShred, a touchscreen musical instrument he co-created with Stanford University researchers, with student musicians in the MIT Laptop Ensemble and Arts Scholars program; and experienced immersive audio in the MIT Spatial Sound Lab. During his most recent trip to campus in September, he taught a masterclass for pianists in MIT\u2019s Emerson\/Harris Program, which provides a total of 67 scholars and fellows with support for conservatory-level musical instruction.<\/p>\n<p dir=\"ltr\">\u201cI get a kind of rush whenever I come to the university,\u201d Rudess says. \u201cI feel the sense that, wow, all of my musical ideas and inspiration and interests have come together in this really cool way.\u201d<\/p>\n<\/div>\n<p><a href=\"https:\/\/news.mit.edu\/2024\/model-virtuosity-jordan-rudess-jam-bot-1119\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Nicole Estvanik Taylor | Arts at MIT A crowd gathered at the MIT Media Lab in September for a concert by musician Jordan Rudess [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2024\/11\/19\/a-model-of-virtuosity\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":461,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/7752"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=7752"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/7752\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/456"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=7752"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=7752"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=7752"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}