{"id":1339,"date":"2018-11-29T05:00:00","date_gmt":"2018-11-29T05:00:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2018\/11\/29\/reproducing-paintings-that-make-an-impression\/"},"modified":"2018-11-29T05:00:00","modified_gmt":"2018-11-29T05:00:00","slug":"reproducing-paintings-that-make-an-impression","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2018\/11\/29\/reproducing-paintings-that-make-an-impression\/","title":{"rendered":"Reproducing paintings that make an impression"},"content":{"rendered":"<p>Author: Rachel Gordon | CSAIL<\/p>\n<div>\n<p>The empty frames hanging inside the Isabella Stewart Gardner Museum serve as a tangible reminder of the world\u2019s biggest unsolved art heist. While the original masterpieces may never be recovered, a team from MIT\u2019s Computer Science and Artificial Intelligence Laboratory (CSAIL) might be able to help, with a new system aimed at designing reproductions of paintings.<\/p>\n<p>RePaint uses a combination of 3-D printing and deep learning to authentically recreate favorite paintings, regardless of lighting conditions or placement. RePaint could be used to remake artwork for a home, protect originals from wear and tear in museums, or even help companies create prints and postcards of historical pieces.<\/p>\n<p>\u201cIf you just reproduce the color of a painting as it looks in the gallery, it might look different in your home,\u201d says Changil Kim, one of the authors on a new paper about the system, which will be presented at ACM SIGGRAPH Asia in December. \u201cOur system works under any lighting condition, which shows a far greater color reproduction capability than almost any other previous work.\u201d<\/p>\n<p>To test RePaint, the team reproduced a number of oil paintings created by an artist collaborator. 
The team found that RePaint was more than four times more accurate than state-of-the-art physical models at creating the exact color shades for different artworks.<\/p>\n<p>At this time the reproductions are only about the size of a business card, due to the time-consuming nature of the printing process. In the future the team expects that more advanced, commercial 3-D printers could help make larger paintings more efficiently.<\/p>\n<p>While 2-D printers are most commonly used for reproducing paintings, they have a fixed set of just four inks (cyan, magenta, yellow, and black). The researchers, however, found a better way to capture a fuller spectrum of Degas and Dali. They used a special technique they call \u201ccolor-contoning,\u201d which involves using a 3-D printer and 10 different transparent inks stacked in very thin layers, much like the wafers and chocolate in a Kit-Kat bar. They combined their method with a decades-old technique called half-toning, where an image is created by lots of little colored dots rather than continuous tones. Combining these, the team says, better captured the nuances of the colors.<\/p>\n<p>With a larger color palette to work with, the question of which inks to use for which paintings still remained. Instead of using more laborious physical approaches, the team trained a deep-learning model to predict the optimal stack of different inks. Once the model was trained, they fed in images of paintings and used it to determine which colors should be used in which areas of specific paintings.<\/p>\n<p>Despite the progress so far, the team says they have a few improvements to make before they can whip up a dazzling duplicate of \u201cStarry Night.\u201d For example, mechanical engineer Mike Foshey said they couldn\u2019t completely reproduce certain colors like cobalt blue due to a limited ink library. 
In the future they plan to expand this library, as well as create a painting-specific algorithm for selecting inks, he says. They also hope to achieve better detail to account for aspects like surface texture and reflection, so that they can reproduce specific effects such as glossy and matte finishes.<\/p>\n<p>\u201cThe value of fine art has rapidly increased in recent years, so there\u2019s an increased tendency for it to be locked up in warehouses away from the public eye,\u201d says Foshey. \u201cWe\u2019re building the technology to reverse this trend, and to create inexpensive and accurate reproductions that can be enjoyed by all.\u201d<\/p>\n<p>Kim and Foshey worked on the system alongside lead author Liang Shi; MIT professor Wojciech Matusik; former MIT postdoc Vahid Babaei, now a group leader at the Max Planck Institute for Informatics; Princeton University computer science professor Szymon Rusinkiewicz; and former MIT postdoc Pitchaya Sitthi-Amorn, who is now a lecturer at Chulalongkorn University in Bangkok, Thailand.<\/p>\n<p>This work is supported in part by the National Science Foundation.<\/p>\n<\/div>\n<p><a href=\"http:\/\/news.mit.edu\/2018\/mit-csail-repaint-system-reproducing-paintings-make-impression-1129\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Rachel Gordon | CSAIL The empty frames hanging inside the Isabella Stewart Gardner Museum serve as a tangible reminder of the world\u2019s biggest unsolved [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2018\/11\/29\/reproducing-paintings-that-make-an-impression\/\">Read 
More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":469,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/1339"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=1339"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/1339\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/471"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=1339"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=1339"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=1339"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}