{"id":5249,"date":"2021-12-01T01:00:00","date_gmt":"2021-12-01T01:00:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2021\/12\/01\/machine-learning-to-make-sign-language-more-accessible\/"},"modified":"2021-12-01T01:00:00","modified_gmt":"2021-12-01T01:00:00","slug":"machine-learning-to-make-sign-language-more-accessible","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2021\/12\/01\/machine-learning-to-make-sign-language-more-accessible\/","title":{"rendered":"Machine learning to make sign language more accessible"},"content":{"rendered":"<div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p>Google has spent over twenty years helping to make information accessible and useful in more than 150 languages. And our work is definitely not done, because the internet changes so quickly. About 15% of searches we see are entirely new every day. And when it comes to other types of information beyond words, in many ways, technology hasn\u2019t even begun to scratch the surface of what\u2019s possible. Take one example: sign language.<\/p>\n<p>The task is daunting. There are as many sign languages as there are spoken languages around the world. That\u2019s why, when we started exploring how we could better support sign language, we started small by researching and experimenting with what machine learning models could recognize. We also spoke with members of the Deaf community, as well as linguistic experts. 
We began combining several ML models to recognize sign language as a sum of its parts \u2014 going beyond just hands to include body gestures and facial expressions.<\/p>\n<p>After 14 months of testing with a database of videos for Japanese Sign Language and Hong Kong Sign Language, we launched <a href=\"https:\/\/sign.town\/\">SignTown<\/a>: an interactive desktop application that works with a web browser and camera.<\/p>\n<p>SignTown is an interactive web game built to help people learn about sign language and Deaf culture. It uses machine learning to detect the user&#8217;s ability to perform signs learned from the game.<\/p>\n<h3><b>Project Shuwa<\/b><\/h3>\n<p>SignTown is only one component of a broader effort to push the boundaries of technology for sign language and Deaf culture, named \u201c<a href=\"https:\/\/projectshuwa.com\/\">Project Shuwa<\/a>\u201d after the Japanese word for sign language (\u201c\u624b\u8a71\u201d). Future areas of development we\u2019re exploring include building a more comprehensive dictionary across more sign and written languages, as well as collaborating with the Google Search team on surfacing these results to improve search quality for sign languages.<\/p>\n<\/div>\n<\/div>\n<div class=\"block-image_full_width\">\n<div class=\"h-c-page\">\n<div class=\" article-image h-c-grid__col-l--6 h-c-grid__col--8 h-c-grid__col-l--offset-3 h-c-grid__col--offset-2 \"><img decoding=\"async\" alt=\"A woman in a black top facing the camera and making a sign with her right hand.\" class=\"article-image--large\" src=\"https:\/\/storage.googleapis.com\/gweb-uniblog-publish-prod\/images\/unnamed_7.max-1000x1000.png\" tabindex=\"0\"><\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p>Advances in AI and ML now allow us to reliably detect hands, body poses and facial expressions using any camera inside a laptop or mobile phone. 
SignTown uses the <a href=\"https:\/\/google.github.io\/mediapipe\/solutions\/holistic\">MediaPipe Holistic model<\/a> to identify keypoints from raw video frames, which we then feed into a classifier model to determine which sign is the closest match. All of this runs inside the user&#8217;s browser, powered by <a href=\"https:\/\/www.tensorflow.org\/js\">TensorFlow.js<\/a>.<\/p>\n<\/div>\n<\/div>\n<div class=\"block-image_full_width\">\n<div class=\"h-c-page\">\n<div class=\" article-image h-c-grid__col-l--6 h-c-grid__col--8 h-c-grid__col-l--offset-3 h-c-grid__col--offset-2 \"><img decoding=\"async\" alt=\"A grid with separate images of four people facing the camera and making signs with their hands.\" class=\"article-image--large\" src=\"https:\/\/storage.googleapis.com\/gweb-uniblog-publish-prod\/images\/unnamed_8_EEDFnfH.max-1000x1000.png\" tabindex=\"0\"><\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p>At Google I\/O 2021, we <a href=\"https:\/\/github.com\/google\/shuwa\">open-sourced<\/a> the core models and tools so that developers and researchers can build their own custom models. That means anyone who wants to train and deploy their own sign language model can do so.<\/p>\n<p>At Google, we strive to help build a more accessible world for people with disabilities through technology. Our progress depends on collaborating with the right partners and developers to shape experiments that may one day become stand-alone tools. But it\u2019s equally important that we raise awareness in the wider community to foster diversity and inclusivity. 
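To make the "closest match" step concrete, here is a minimal sketch of nearest-match classification over keypoint feature vectors. Everything in it is hypothetical: the function names, the sign labels, and the tiny 3-dimensional vectors stand in for the real normalized MediaPipe Holistic keypoints, and SignTown's actual classifier is a trained TensorFlow.js model rather than this cosine-similarity lookup.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def closest_sign(keypoints, references):
    """Return the sign label whose reference vector best matches `keypoints`.

    `references` maps each sign label to a reference feature vector, e.g.
    an averaged set of normalized hand/pose keypoints for that sign.
    """
    return max(references, key=lambda label: cosine_similarity(keypoints, references[label]))

# Toy example with made-up 3-dimensional features:
references = {
    "hello": [1.0, 0.0, 0.0],
    "thank_you": [0.0, 1.0, 0.0],
}
print(closest_sign([0.9, 0.1, 0.0], references))  # prints "hello"
```

A learned classifier generalizes far better than this lookup, but the shape of the problem is the same: reduce each video frame to a keypoint feature vector, then score it against every known sign and take the best match.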
We hope our work in this area with SignTown gets us a little closer to that goal.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<p><a href=\"https:\/\/blog.google\/outreach-initiatives\/accessibility\/ml-making-sign-language-more-accessible\/\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Google has spent over twenty years helping to make information accessible and useful in more than 150 languages. And our work is definitely not [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2021\/12\/01\/machine-learning-to-make-sign-language-more-accessible\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":5250,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/5249"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=5249"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/5249\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/5250"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=5249"}],"wp:term":[{"taxonomy":"category","embeddable":tru
e,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=5249"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=5249"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}