{"id":3840,"date":"2020-09-04T06:34:05","date_gmt":"2020-09-04T06:34:05","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2020\/09\/04\/free-online-book-machine-learning-from-scratch\/"},"modified":"2020-09-04T06:34:05","modified_gmt":"2020-09-04T06:34:05","slug":"free-online-book-machine-learning-from-scratch","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2020\/09\/04\/free-online-book-machine-learning-from-scratch\/","title":{"rendered":"Free online book &#8211; Machine Learning from Scratch"},"content":{"rendered":"<p>Author: Daniel Friedman<\/p>\n<div>\n<p>Hi all,<\/p>\n<p>I&#8217;m writing to share a book I just published that I think many of you might find interesting or useful.&nbsp;<\/p>\n<p>The book is called &#8220;Machine Learning from Scratch.&#8221; It<span>&nbsp;provides complete derivations of the most common algorithms in ML (OLS, logistic regression, naive Bayes, trees, boosting, neural nets, etc.) in both theory and math. It also demonstrates how to construct each of these methods from scratch in Python using only numpy.<\/span><\/p>\n<p><span>My aim with the book is to provide a very thorough rundown of the fitting process behind the algorithms we see every day. 
I hope that seeing the models derived in math or constructed in code helps readers understand the models at a deeper level and feel more comfortable optimizing them for their own work.<\/span><\/p>\n<p>Any comments or questions would be very much appreciated, whether on this post, on the book&#8217;s GitHub, or to me directly at dafrdman@gmail.com.&nbsp;<\/p>\n<p><em>The book is available&nbsp;<a href=\"https:\/\/dafriedman97.github.io\/mlbook\/content\/introduction.html\" target=\"_self\" rel=\"noopener noreferrer\">here<\/a><\/em><\/p>\n<p><em><a href=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/7823234097?profile=original\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" src=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/7823234097?profile=RESIZE_710x\" class=\"align-center\"><\/a><\/em><\/p>\n<div class=\"section\" id=\"what-this-book-covers\">\n<h2>What this Book Covers<\/h2>\n<p>This book covers the building blocks of the most common methods in machine learning. This set of methods is like a toolbox for machine learning engineers. Those entering the field of machine learning should feel comfortable with this toolbox so they have the right tool for a variety of tasks. Each chapter in this book corresponds to a single machine learning method or group of methods. In other words, each chapter focuses on a single tool within the ML toolbox.<\/p>\n<p>In my experience, the best way to become comfortable with these methods is to see them derived from scratch, both in theory and in code. The purpose of this book is to provide those derivations. Each chapter is broken into three sections. The<span>&nbsp;<\/span><em>concept<\/em><span>&nbsp;<\/span>sections introduce the methods conceptually and derive their results mathematically. The<span>&nbsp;<\/span><em>construction<\/em><span>&nbsp;<\/span>sections show how to construct the methods from scratch using Python. 
The<span>&nbsp;<\/span><em>implementation<\/em><span>&nbsp;<\/span>sections demonstrate how to apply the methods using packages in Python like<span>&nbsp;<\/span><code class=\"docutils literal notranslate\"><span class=\"pre\">scikit-learn<\/span><\/code>,<span>&nbsp;<\/span><code class=\"docutils literal notranslate\"><span class=\"pre\">statsmodels<\/span><\/code>, and<span>&nbsp;<\/span><code class=\"docutils literal notranslate\"><span class=\"pre\">tensorflow<\/span><\/code>.<\/p>\n<\/div>\n<div class=\"section\" id=\"why-this-book\">\n<h2>Why this Book<\/h2>\n<p>There are many great books on machine learning written by more knowledgeable authors and covering a broader range of topics. In particular, I would suggest<span>&nbsp;<\/span><a class=\"reference external\" href=\"http:\/\/faculty.marshall.usc.edu\/gareth-james\/ISL\/\">An Introduction to Statistical Learning<\/a>,<span>&nbsp;<\/span><a class=\"reference external\" href=\"https:\/\/web.stanford.edu\/~hastie\/ElemStatLearn\/\">Elements of Statistical Learning<\/a>, and<span>&nbsp;<\/span><a class=\"reference external\" href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/pattern-recognition-machine-learning\/\">Pattern Recognition and Machine Learning<\/a>, all of which are available online for free.<\/p>\n<p>While those books provide a conceptual overview of machine learning and the theory behind its methods, this book focuses on the bare bones of machine learning algorithms. Its main purpose is to provide readers with the ability to construct these algorithms independently. Continuing the toolbox analogy, this book is intended as a user guide: it is not designed to teach users broad practices of the field but rather how each tool works at a micro level.<\/p>\n<\/div>\n<div class=\"section\" id=\"who-this-book-is-for\">\n<h2>Who this Book is for<\/h2>\n<p>This book is for readers looking to learn new machine learning algorithms or understand algorithms at a deeper level. 
Specifically, it is intended for readers interested in seeing machine learning algorithms derived from start to finish. Seeing these derivations might help a reader previously unfamiliar with common algorithms understand how they work intuitively. Or, seeing these derivations might help a reader experienced in modeling understand how different algorithms create the models they do and the advantages and disadvantages of each one.<\/p>\n<p>This book will be most helpful for those with practice in basic modeling. It does not review best practices&mdash;such as feature engineering or balancing response variables&mdash;or discuss in depth when certain models are more appropriate than others. Instead, it focuses on the elements of those models.<\/p>\n<\/div>\n<div class=\"section\" id=\"what-readers-should-know\">\n<h2>What Readers Should Know<\/h2>\n<p>The<span>&nbsp;<\/span><em>concept<\/em><span>&nbsp;<\/span>sections of this book primarily require knowledge of calculus, though some require an understanding of probability (think maximum likelihood and Bayes&rsquo; Rule) and basic linear algebra (think matrix operations and dot products). The appendix reviews the<span>&nbsp;<\/span><a class=\"reference internal\" href=\"https:\/\/dafriedman97.github.io\/mlbook\/content\/appendix\/methods.html\"><span class=\"doc\">math<\/span><\/a><span>&nbsp;<\/span>and<span>&nbsp;<\/span><a class=\"reference internal\" href=\"https:\/\/dafriedman97.github.io\/mlbook\/content\/appendix\/methods.html\"><span class=\"doc\">probability<\/span><\/a> needed to understand this book. The concept sections also reference a few common machine learning<span>&nbsp;<\/span><a class=\"reference internal\" href=\"https:\/\/dafriedman97.github.io\/mlbook\/content\/appendix\/methods.html\"><span class=\"doc\">methods<\/span><\/a>, which are introduced in the appendix as well. 
The concept sections do not require any knowledge of programming.<\/p>\n<p>The<span>&nbsp;<\/span><em>construction<\/em><span>&nbsp;<\/span>and<span>&nbsp;<\/span><em>implementation<\/em><span>&nbsp;<\/span>sections of this book use some basic Python. The construction sections require an understanding of the corresponding concept sections and familiarity with creating functions and classes in Python. The implementation sections require neither.<\/p>\n<\/div>\n<\/div>\n<p><a href=\"https:\/\/www.datasciencecentral.com\/xn\/detail\/6448529:BlogPost:979616\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Daniel Friedman Hi all, I&#8217;m writing to share a book I just published that I think many of you might find interesting or useful.&nbsp; [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2020\/09\/04\/free-online-book-machine-learning-from-scratch\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":462,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[26],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/3840"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=3840"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/pos
ts\/3840\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/464"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=3840"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=3840"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=3840"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}