{"id":6284,"date":"2023-02-10T18:00:00","date_gmt":"2023-02-10T18:00:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2023\/02\/10\/helping-companies-deploy-ai-models-more-responsibly\/"},"modified":"2023-02-10T18:00:00","modified_gmt":"2023-02-10T18:00:00","slug":"helping-companies-deploy-ai-models-more-responsibly","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2023\/02\/10\/helping-companies-deploy-ai-models-more-responsibly\/","title":{"rendered":"Helping companies deploy AI models more responsibly"},"content":{"rendered":"<p>Author: Zach Winn | MIT News Office<\/p>\n<div>\n<p>Companies today are incorporating artificial intelligence into every corner of their business. The trend is expected to continue until machine-learning models are incorporated into most of the products and services we interact with every day.<\/p>\n<p>As those models become a bigger part of our lives, ensuring their integrity becomes more important. That\u2019s the mission of Verta, a startup that spun out of MIT\u2019s Computer Science and Artificial Intelligence Laboratory (CSAIL).<\/p>\n<p>Verta\u2019s platform helps companies deploy, monitor, and manage machine-learning models safely and at scale. Data scientists and engineers can use Verta\u2019s tools to track different versions of models, audit them for bias, test them before deployment, and monitor their performance in the real world.<\/p>\n<p>\u201cEverything we do is to enable more products to be built with AI, and to do that safely,\u201d Verta founder and CEO Manasi Vartak SM \u201914, PhD \u201918 says. \u201cWe\u2019re already seeing with ChatGPT how AI can be used to generate data, artefacts \u2014 you name it \u2014 that look correct but aren\u2019t correct. 
There needs to be more governance and control in how AI is being used, particularly for enterprises providing AI solutions.\u201d<\/p>\n<p>Verta is currently working with large companies in health care, finance, and insurance to help them understand and audit their models\u2019 recommendations and predictions. It\u2019s also working with a number of high-growth tech companies looking to speed up deployment of new, AI-enabled solutions while ensuring those solutions are used appropriately.<\/p>\n<p>Vartak says the company has been able to decrease the time it takes customers to deploy AI models by orders of magnitude while ensuring those models are explainable and fair \u2014 an especially important factor for companies in highly regulated industries.<\/p>\n<p>Health care companies, for example, can use Verta to improve AI-powered patient monitoring and treatment recommendations. Such systems need to be thoroughly vetted for errors and biases before they\u2019re used on patients.<\/p>\n<p>\u201cWhether it\u2019s bias or fairness or explainability, it goes back to our philosophy on model governance and management,\u201d Vartak says. \u201cWe think of it like a preflight checklist: Before an airplane takes off, there\u2019s a set of checks you need to do before you get your airplane off the ground. It\u2019s similar with AI models. You need to make sure you\u2019ve done your bias checks, you need to make sure there\u2019s some level of explainability, you need to make sure your model is reproducible. We help with all of that.\u201d<\/p>\n<p><strong>From project to product<\/strong><\/p>\n<p>Before coming to MIT, Vartak worked as a data scientist for a social media company. In one project, after spending weeks tuning machine-learning models that curated content to show in people\u2019s feeds, she learned an ex-employee had already done the same thing. 
Unfortunately, there was no record of what they did or how it affected the models.<\/p>\n<p>For her PhD at MIT, Vartak decided to build tools to help data scientists develop, test, and iterate on machine-learning models. Working in CSAIL\u2019s Database Group, Vartak recruited a team of graduate students and participants in MIT\u2019s Undergraduate Research Opportunities Program (UROP).<\/p>\n<p>\u201cVerta would not exist without my work at MIT and MIT\u2019s ecosystem,\u201d Vartak says. \u201cMIT brings together people on the cutting edge of tech and helps us build the next generation of tools.\u201d<\/p>\n<p>The team worked with data scientists in the CSAIL Alliances program to decide what features to build and iterated based on feedback from those early adopters. Vartak says the resulting project, named ModelDB, was the first open-source model management system.<\/p>\n<p>Vartak also took several business classes at the MIT Sloan School of Management during her PhD and worked with classmates on startups that recommended clothing and tracked health, spending countless hours in the Martin Trust Center for MIT Entrepreneurship and participating in the center\u2019s delta v summer accelerator.<\/p>\n<p>\u201cWhat MIT lets you do is take risks and fail in a safe environment,\u201d Vartak says. \u201cMIT afforded me those forays into entrepreneurship and showed me how to go about building products and finding first customers, so by the time Verta came around I had done it on a smaller scale.\u201d<\/p>\n<p>ModelDB helped data scientists train and track models, but Vartak quickly saw the stakes were higher once models were deployed at scale. At that point, trying to improve (or accidentally breaking) models can have major implications for companies and society. 
That insight led Vartak to begin building Verta.<\/p>\n<p>\u201cAt Verta, we help manage models, help run models, and make sure they\u2019re working as expected, which we call model monitoring,\u201d Vartak explains. \u201cAll of those pieces have their roots back to MIT and my thesis work. Verta really evolved from my PhD project at MIT.\u201d<\/p>\n<p>Verta\u2019s platform helps companies deploy models more quickly, ensure they continue working as intended over time, and manage the models for compliance and governance. Data scientists can use Verta to track different versions of models and understand how they were built, answering questions like how data were used and which explainability or bias checks were run. They can also vet models by running them through deployment checklists and security scans.<\/p>\n<p>\u201cVerta\u2019s platform takes the data science model and adds half a dozen layers to it to transform it into something you can use to power, say, an entire recommendation system on your website,\u201d Vartak says. \u201cThat includes performance optimizations, scaling, and cycle time, which is how quickly you can take a model and turn it into a valuable product, as well as governance.\u201d<\/p>\n<p><strong>Supporting the AI wave<\/strong><\/p>\n<p>Vartak says large companies often use thousands of different models that influence nearly every part of their operations.<\/p>\n<p>\u201cAn insurance company, for example, will use models for everything from underwriting to claims, back-office processing, marketing, and sales,\u201d Vartak says. \u201cSo, the diversity of models is really high, there\u2019s a large volume of them, and the level of scrutiny and compliance companies need around these models is very high. They need to know things like: Did you use the data you were supposed to use? Who were the people who vetted it? Did you run explainability checks? 
Did you run bias checks?\u201d<\/p>\n<p>Vartak says companies that don\u2019t adopt AI will be left behind. The companies that ride AI to success, meanwhile, will need well-defined processes in place to manage their ever-growing list of models.<\/p>\n<p>\u201cIn the next 10 years, every device we interact with is going to have intelligence built in, whether it\u2019s a toaster or your email program, and it\u2019s going to make your life much, much easier,\u201d Vartak says. \u201cWhat\u2019s going to enable that intelligence are better models and software, like Verta, that help you integrate AI into all of these applications very quickly.\u201d<\/p>\n<\/div>\n<p><a href=\"https:\/\/news.mit.edu\/2023\/verta-helping-companies-deploy-ai-models-0210\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Zach Winn | MIT News Office Companies today are incorporating artificial intelligence into every corner of their business. The trend is expected to continue [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2023\/02\/10\/helping-companies-deploy-ai-models-more-responsibly\/\">Read 
More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":456,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/6284"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=6284"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/6284\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/466"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=6284"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=6284"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=6284"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}