{"id":2745,"date":"2019-10-28T06:32:46","date_gmt":"2019-10-28T06:32:46","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2019\/10\/28\/understanding-the-applications-of-probability-in-machine-learning\/"},"modified":"2019-10-28T06:32:46","modified_gmt":"2019-10-28T06:32:46","slug":"understanding-the-applications-of-probability-in-machine-learning","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2019\/10\/28\/understanding-the-applications-of-probability-in-machine-learning\/","title":{"rendered":"Understanding the applications of Probability in Machine Learning"},"content":{"rendered":"<p>Author: ajit jaokar<\/p>\n<div>\n<p><a href=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/3683069750?profile=original\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" src=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/3683069750?profile=RESIZE_710x\" class=\"align-full\"><\/a><\/p>\n<p>\u00a0<\/p>\n<p>This post is part of my forthcoming book <strong>The Mathematical Foundations of Data Science<\/strong>. Probability is one of the foundations of machine learning (along with linear algebra and optimization). In this post, we discuss the areas where probability theory applies in machine learning. If you want to know more about the book, follow <a href=\"https:\/\/www.linkedin.com\/in\/ajitjaokar\/\">Ajit Jaokar on LinkedIn<\/a>.\u00a0<\/p>\n<p>\u00a0<\/p>\n<h2>Background<\/h2>\n<p>\u00a0<\/p>\n<p>First, we explore some background behind probability theory.<\/p>\n<h3>Probability as a measure of uncertainty<\/h3>\n<p>\u00a0<\/p>\n<p>Probability is a measure of uncertainty. Probability applies to machine learning because in the real world, we need to make decisions with incomplete information. Hence, we need a mechanism to quantify uncertainty \u2013 which probability provides. 
Using probability, we can model elements of uncertainty such as risk in financial transactions and many other business processes. In contrast, in traditional programming we work with deterministic problems, i.e. the solution is not affected by uncertainty.\u00a0<\/p>\n<h3>Probability of an event<\/h3>\n<p>Probability quantifies the likelihood or belief that an event will occur. Probability theory has three important concepts: the <strong>Event<\/strong> &#8211; an outcome to which a probability is assigned; the <strong>Sample Space<\/strong>, which represents the set of possible outcomes; and the <strong>Probability Function<\/strong>, which maps a probability to an event. The probability function indicates the likelihood that a given event from the sample space occurs. The <strong>probability distribution<\/strong> represents the shape or distribution of probabilities over all events in the sample space. The probability of an event can be calculated directly by counting the occurrences of the event and dividing by the total number of possible outcomes. Probability takes a value in the range between 0 and 1, where 0 indicates impossibility and 1 indicates certainty.\u00a0<\/p>\n<h3>Two Schools of Probability<\/h3>\n<p>There are two ways of interpreting probability: <strong>frequentist probability<\/strong>, which considers the actual frequency of an event, and <strong>Bayesian probability<\/strong>, which considers how strongly we believe that an event will occur. Frequentist probability includes techniques like <strong>p-values<\/strong> and <strong>confidence intervals<\/strong> used in statistical inference, and <strong>maximum likelihood estimation<\/strong> for <strong>parameter estimation<\/strong>.<\/p>\n<p>\u00a0<\/p>\n<p>Frequentist techniques are based on counts and Bayesian techniques are based on beliefs. In the Bayesian approach, probabilities are assigned to events based on evidence and personal belief. 
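As a minimal sketch of this belief-updating idea (the test numbers below are assumed purely for illustration, not taken from the post), Bayes' theorem combines a prior belief with the likelihood of observed evidence:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
# Illustrative (assumed) numbers: a diagnostic test for a rare condition.
prior = 0.01            # P(H): belief before seeing any evidence
sensitivity = 0.95      # P(E|H): probability of a positive test if H is true
false_positive = 0.05   # P(E|~H): probability of a positive test if H is false

# Total probability of the evidence (law of total probability)
p_evidence = sensitivity * prior + false_positive * (1 - prior)

# Updated (posterior) belief after observing the evidence
posterior = sensitivity * prior / p_evidence
print(round(posterior, 3))  # prints 0.161
```

Even strong evidence only moves the 1% prior to about 16% here, which illustrates how Bayesian techniques weigh evidence against prior belief.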
The Bayesian techniques are based on Bayes\u2019 theorem. Bayesian analysis can be used to model events that have not occurred before or occur infrequently. In contrast, frequentist techniques are based on <strong>sampling<\/strong> \u2013 hence on the frequency of occurrence of an event.\u00a0For example, the <a href=\"https:\/\/www.investopedia.com\/terms\/p\/p-value.asp\">p-value<\/a> is a number between 0 and 1. The larger the p-value, the more the data is consistent with the null hypothesis; the smaller the p-value, the more the data is consistent with the alternate hypothesis. If the p-value is less than 0.05, we reject the null hypothesis in favor of the alternate hypothesis.<\/p>\n<p>\u00a0<\/p>\n<h2>Applications<\/h2>\n<p>With this background, let us explore how probability can apply to machine learning.<\/p>\n<p>\u00a0<\/p>\n<h3>Sampling &#8211; Dealing with non-deterministic processes<\/h3>\n<p>Probability forms the basis of sampling. In machine learning, uncertainty can arise in many ways \u2013 for example, noise in data. Probability provides a set of tools to model uncertainty. Noise could arise due to variability in the observations, as a measurement error or from other sources. Noise affects both inputs and outputs.\u00a0<\/p>\n<p>Apart from noise in the sample data, we should also cater for the effects of bias. Even when the observations are uniformly sampled, i.e. no bias is assumed in the sampling, other limitations can introduce bias. For example, if we choose a set of participants from a specific region of the country, then by definition the sample is biased to that region. We could expand the sample scope and the variance in the data by including more regions of the country. We need to balance the variance and the bias so that the sample chosen is representative of the task we are trying to model.<\/p>\n<p>Typically, we are given a dataset, i.e. we do not have control over the creation and sampling process of the dataset. 
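A toy sketch of working with such a given dataset (the dataset, split fraction and seed are assumed for illustration): hold out part of the data as a test set, and use bootstrap resampling on the rest to probe sampling variability:

```python
import random

random.seed(42)  # make the random sampling reproducible

# A dataset we are "given" -- we had no control over how it was collected
data = list(range(100))

# Hold out 20% as a test set to estimate performance on unseen data
random.shuffle(data)
split = int(0.8 * len(data))
train, test = data[:split], data[split:]

# Bootstrap resampling: draw a new training set with replacement, to
# quantify how much an estimate varies with the particular sample drawn
bootstrap = [random.choice(train) for _ in range(len(train))]

print(len(train), len(test), len(bootstrap))  # prints: 80 20 80
```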
To cater for this lack of control over sampling, we split the data into train and test sets, or we use resampling techniques. <strong><em>Hence, probability (through sampling) is involved when we have incomplete coverage of the problem domain.<\/em><\/strong>\u00a0<\/p>\n<p>\u00a0<\/p>\n<h3>Pattern recognition<\/h3>\n<p>Pattern recognition is a key part of machine learning. We can approach machine learning as a pattern recognition problem from a Bayesian standpoint. In <a href=\"https:\/\/www.amazon.com\/Pattern-Recognition-Learning-Information-Statistics\/dp\/0387310738\">Pattern Recognition and Machine Learning<\/a>, Christopher Bishop takes a Bayesian view and presents approximate inference algorithms for situations where exact answers are not feasible. For the same reasons listed above, probability theory is a key part of pattern recognition: it helps to cater for noise and uncertainty, for the finite size of the sample, and to apply Bayesian principles to machine learning.<\/p>\n<p>\u00a0<\/p>\n<h3>Training &#8211; use in maximum likelihood estimation<\/h3>\n<p>\u00a0<\/p>\n<p>Many machine learning training techniques, like <a href=\"https:\/\/towardsdatascience.com\/probability-concepts-explained-maximum-likelihood-estimation-c7b4342fdbb1\">maximum likelihood estimation<\/a> (MLE), are based on probability theory. MLE is used for training models like linear regression, logistic regression and artificial neural networks.<\/p>\n<p>\u00a0<\/p>\n<h3>Developing specific algorithms<\/h3>\n<p>Probability forms the basis of specific algorithms like the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Naive_Bayes_classifier\">Naive Bayes classifier<\/a>.\u00a0<\/p>\n<p>\u00a0<\/p>\n<h3>Hyperparameter optimization<\/h3>\n<p>In machine learning models such as neural networks, hyperparameters are tuned through techniques like grid search. 
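A grid search simply evaluates every combination of candidate hyperparameter values and keeps the best. The sketch below uses an assumed stand-in scoring function rather than a real model, purely to show the mechanics:

```python
from itertools import product

# Candidate hyperparameter values (assumed for illustration)
grid = {"learning_rate": [0.01, 0.1], "hidden_units": [16, 32, 64]}

def score(learning_rate, hidden_units):
    # Stand-in for training a model and measuring validation accuracy;
    # peaks at learning_rate=0.1, hidden_units=32 by construction
    return 1.0 - abs(learning_rate - 0.1) - abs(hidden_units - 32) / 100

# Evaluate every combination in the grid and keep the highest-scoring one
best = max(
    (dict(zip(grid, combo)) for combo in product(*grid.values())),
    key=lambda params: score(**params),
)
print(best)  # prints: {'learning_rate': 0.1, 'hidden_units': 32}
```

The cost grows multiplicatively with each hyperparameter, which is one motivation for smarter, probability-based search.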
Bayesian optimization can also be used for hyperparameter optimization.<\/p>\n<p>\u00a0<\/p>\n<h3>Model evaluation<\/h3>\n<p>In binary classification tasks, we predict a single probability score. Model evaluation techniques require us to summarize the performance of a model based on predicted probabilities. For example, aggregation measures like <a href=\"http:\/\/wiki.fast.ai\/index.php\/Log_Loss\">log loss<\/a> require an understanding of probability theory.<\/p>\n<p>\u00a0<\/p>\n<p><strong>Applied fields of study<\/strong><\/p>\n<p>Probability forms the foundation of many applied fields of study, such as physics, biology, and computer science.<\/p>\n<p>\u00a0<\/p>\n<p><strong>Inference<\/strong><\/p>\n<p>Probability is a key part of inference &#8211; maximum likelihood estimation in the frequentist approach and Bayesian inference in the Bayesian approach.\u00a0<\/p>\n<h2>Conclusion<\/h2>\n<p>As we see above, there are many areas of machine learning where probability concepts apply. Yet, they are not commonly taught in typical coding programs on machine learning. In the last blog, we discussed this trend in the context of <a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blogs\/correlation-does-not-equal-causation-but-how-exactly-do-you\">correlation vs causation<\/a>. I suspect the same is true here, i.e. the starting point for most developers is a dataset they are already provided with. In contrast, if you conduct a PhD experiment or thesis, you typically have to build your experiment from scratch. 
\u00a0<\/p>\n<p>\u00a0<\/p>\n<p>If you want to know more about the book, follow <a href=\"https:\/\/www.linkedin.com\/in\/ajitjaokar\/\">Ajit Jaokar on LinkedIn<\/a>.\u00a0<\/p>\n<p>\u00a0<\/p>\n<p>\u00a0<\/p>\n<p>Image source: <a href=\"https:\/\/en.wikipedia.org\/wiki\/Dice\">Dice<\/a><\/p>\n<\/div>\n<p><a href=\"https:\/\/www.datasciencecentral.com\/xn\/detail\/6448529:BlogPost:903024\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: ajit jaokar \u00a0 This post is part of my forthcoming book The Mathematical Foundations of Data Science. Probability is one of the foundations of [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2019\/10\/28\/understanding-the-applications-of-probability-in-machine-learning\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":467,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[26],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2745"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=2745"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2745\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\
/v2\/media\/459"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=2745"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=2745"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=2745"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}