{"id":2653,"date":"2019-10-04T06:34:42","date_gmt":"2019-10-04T06:34:42","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2019\/10\/04\/free-book-a-comprehensive-guide-to-machine-learning-berkeley-university\/"},"modified":"2019-10-04T06:34:42","modified_gmt":"2019-10-04T06:34:42","slug":"free-book-a-comprehensive-guide-to-machine-learning-berkeley-university","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2019\/10\/04\/free-book-a-comprehensive-guide-to-machine-learning-berkeley-university\/","title":{"rendered":"Free Book: A Comprehensive Guide to Machine Learning (Berkeley University)"},"content":{"rendered":"<p>Author: Capri Granville<\/p>\n<div>\n<p>By Soroush Nasiriany, Garrett Thomas, William Wang, Alex Yang. Department of Electrical Engineering and Computer Sciences, University of California, Berkeley. Dated June 24, 2019. This is not the same book as\u00a0<a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blogs\/tutorial-the-math-of-machine-learning-berkeley-university\" target=\"_blank\" rel=\"noopener noreferrer\">The Math of Machine Learning<\/a>, published by the same Berkeley department in 2018 and also authored by Garrett Thomas.<\/p>\n<p><a href=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/3642816701?profile=original\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" src=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/3642816701?profile=RESIZE_710x\" class=\"align-center\"><\/a><\/p>\n<p style=\"text-align: center;\"><em>Source: this book, page 21<\/em><\/p>\n<p><span style=\"font-size: 14pt;\"><strong>Contents<\/strong><\/span><\/p>\n<p><strong>1 Regression I (page 5)<\/strong><\/p>\n<ul>\n<li>Ordinary Least Squares (page 5)<\/li>\n<li>Ridge Regression (page 8)<\/li>\n<li>Feature Engineering (page 11)<\/li>\n<li>Hyperparameters and Validation (page 12)<\/li>\n<\/ul>\n<p><strong>2 Regression II (page 17)<\/strong><\/p>\n<ul>\n<li>MLE and MAP for Regression (Part I) (page 17)<\/li>\n<li>Bias-Variance Tradeoff (page 23)<\/li>\n<li>Multivariate Gaussians (page 30)<\/li>\n<li>MLE and MAP for Regression (Part II) (page 37)<\/li>\n<li>Kernels and Ridge Regression (page 44)<\/li>\n<li>Sparse Least Squares (page 50)<\/li>\n<li>Total Least Squares (page 57)<\/li>\n<\/ul>\n<p><strong>3 Dimensionality Reduction (page 63)<\/strong><\/p>\n<ul>\n<li>Principal Component Analysis (page 63)<\/li>\n<li>Canonical Correlation Analysis (page 70)<\/li>\n<\/ul>\n<p><strong>4 Beyond Least Squares: Optimization and Neural Networks (page 79)<\/strong><\/p>\n<ul>\n<li>Nonlinear Least Squares (page 79)<\/li>\n<li>Optimization (page 81)<\/li>\n<li>Gradient Descent (page 82)<\/li>\n<li>Line Search (page 88)<\/li>\n<li>Convex Optimization (page 89)<\/li>\n<li>Newton\u2019s Method (page 93)<\/li>\n<li>Gauss-Newton Algorithm (page 96)<\/li>\n<li>Neural Networks (page 97)<\/li>\n<li>Training Neural Networks (page 103)<\/li>\n<\/ul>\n<p><strong>5 Classification (page 107)<\/strong><\/p>\n<ul>\n<li>Generative vs. Discriminative Classification (page 107)<\/li>\n<li>Least Squares Support Vector Machine (page 109)<\/li>\n<li>Logistic Regression (page 113)<\/li>\n<li>Gaussian Discriminant Analysis (page 121)<\/li>\n<li>Support Vector Machines (page 127)<\/li>\n<li>Duality (page 134)<\/li>\n<li>Nearest Neighbor Classification (page 145)<\/li>\n<\/ul>\n<p><strong>6 Clustering (page 151)<\/strong><\/p>\n<ul>\n<li>K-means Clustering (page 152)<\/li>\n<li>Mixture of Gaussians (page 155)<\/li>\n<li>Expectation Maximization (EM) Algorithm (page 156)<\/li>\n<\/ul>\n<p><strong>7 Decision Tree Learning (page 163)<\/strong><\/p>\n<ul>\n<li>Decision Trees (page 163)<\/li>\n<li>Random Forests (page 168)<\/li>\n<li>Boosting (page 169)<\/li>\n<\/ul>\n<p><strong>8 Deep Learning (page 175)<\/strong><\/p>\n<ul>\n<li>Convolutional Neural Networks (page 175)<\/li>\n<li>CNN Architectures (page 182)<\/li>\n<li>Visualizing and Understanding CNNs (page 185)<\/li>\n<\/ul>\n<p>I hope the authors will add sections on ensemble methods (combining multiple techniques), cross-validation, and feature selection; with those, the book would cover pretty much everything a beginner should know. You can download this material (PDF document) <a href=\"https:\/\/www.eecs189.org\/static\/resources\/comprehensive-guide.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">here<\/a>.<\/p>\n<p><span style=\"font-size: 14pt;\"><strong>Related Books<\/strong><\/span><\/p>\n<p>Other popular free books, all written by top experts in their fields, include\u00a0<a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blogs\/new-book-foundations-of-data-science-from-microsoft-research-lab\" target=\"_blank\" rel=\"noopener noreferrer\">Foundations of Data Science<\/a>\u00a0published by Microsoft&#8217;s ML Research Lab in 2018, and\u00a0<a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blogs\/free-book-statistics-new-foundations-toolbox-and-machine-learning\" target=\"_blank\" rel=\"noopener noreferrer\">Statistics: New Foundations, Toolbox, and Machine Learning Recipes<\/a>\u00a0published by Data Science Central in 2019. 
Other free DSC books (more to come very soon) include:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blogs\/free-book-classification-and-regression-in-a-weekend\" target=\"_blank\" rel=\"noopener noreferrer\">Classification and Regression In a Weekend<\/a><\/li>\n<li><a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blogs\/online-encyclopedia-of-statistical-science-free-1\" target=\"_blank\" rel=\"noopener noreferrer\">Encyclopedia of Statistical Science<\/a><\/li>\n<li><a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blogs\/free-book-azure-machine-learning-in-a-weekend\" target=\"_blank\" rel=\"noopener noreferrer\">Azure Machine Learning in a Weekend<\/a><\/li>\n<li><a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blogs\/free-ebook-enterprise-ai-an-applications-perspective\" target=\"_blank\" rel=\"noopener noreferrer\">Enterprise AI &#8211; An Application Perspective<\/a><\/li>\n<li><a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blogs\/fee-book-applied-stochastic-processes\" target=\"_blank\" rel=\"noopener noreferrer\">Applied Stochastic Processes<\/a><\/li>\n<\/ul>\n<\/div>\n<p><a href=\"https:\/\/www.datasciencecentral.com\/xn\/detail\/6448529:BlogPost:893757\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Capri Granville By Soroush Nasiriany, Garrett Thomas, William Wang, Alex Yang. Department of Electrical Engineering and Computer Sciences, University of California, Berkeley. 
Dated June [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2019\/10\/04\/free-book-a-comprehensive-guide-to-machine-learning-berkeley-university\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":471,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[26],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2653"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=2653"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2653\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/458"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=2653"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=2653"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=2653"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}