{"id":1745,"date":"2019-02-19T18:00:34","date_gmt":"2019-02-19T18:00:34","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2019\/02\/19\/neural-networks-tricks-of-the-trade-review\/"},"modified":"2019-02-19T18:00:34","modified_gmt":"2019-02-19T18:00:34","slug":"neural-networks-tricks-of-the-trade-review","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2019\/02\/19\/neural-networks-tricks-of-the-trade-review\/","title":{"rendered":"Neural Networks: Tricks of the Trade Review"},"content":{"rendered":"<p>Author: Jason Brownlee<\/p>\n<div>\n<p>Deep learning neural networks are challenging to configure and train.<\/p>\n<p>There are decades of tips and tricks spread across hundreds of research papers, source code bases, and the heads of academics and practitioners.<\/p>\n<p>The book \u201c<a href=\"https:\/\/amzn.to\/2zwUmNM\">Neural Networks: Tricks of the Trade<\/a>,\u201d originally published in 1998 and updated in 2012 at the cusp of the deep learning renaissance, ties together the disparate tips and tricks into a single volume. 
It includes advice that is required reading for all deep learning neural network practitioners.<\/p>\n<p>In this post, you will discover the book \u201c<em>Neural Networks: Tricks of the Trade<\/em>\u201d that provides advice from neural network academics and practitioners on how to get the most out of your models.<\/p>\n<p>After reading this post, you will know:<\/p>\n<ul>\n<li>The motivation for writing the book.<\/li>\n<li>A breakdown of the chapters and topics in the first and second editions.<\/li>\n<li>A list and summary of the must-read chapters for every neural network practitioner.<\/li>\n<\/ul>\n<p>Let\u2019s get started.<\/p>\n<div id=\"attachment_7009\" style=\"width: 340px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/amzn.to\/2zwUmNM\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7009 size-full\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2018\/11\/Neural-Networks-Tricks-of-the-Trade.jpg\" alt=\"Neural Networks - Tricks of the Trade\" width=\"330\" height=\"499\" srcset=\"http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/11\/Neural-Networks-Tricks-of-the-Trade.jpg 330w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/11\/Neural-Networks-Tricks-of-the-Trade-198x300.jpg 198w\" sizes=\"(max-width: 330px) 100vw, 330px\"><\/a>\n<p class=\"wp-caption-text\">Neural Networks \u2013 Tricks of the Trade<\/p>\n<\/div>\n<h2>Overview<\/h2>\n<p>Neural Networks: Tricks of the Trade is a collection of papers on techniques to get better performance from neural network models.<\/p>\n<p>The <a href=\"https:\/\/amzn.to\/2SRyBBa\">first edition<\/a> was published in 1998 and comprised five parts and 17 chapters. 
The <a href=\"https:\/\/amzn.to\/2zwUmNM\">second edition<\/a> was published right on the cusp of the new deep learning renaissance in 2012 and includes three more parts and 13 new chapters.<\/p>\n<p>If you are a deep learning practitioner, then it is a must-read book.<\/p>\n<p>I own and reference both editions.<\/p>\n<h2>Motivation<\/h2>\n<p>The motivation for the book was to collate the empirical and theoretically grounded tips, tricks, and best practices used to get the best performance from neural network models in practice.<\/p>\n<p>The editors\u2019 concern is that many of the useful tips and tricks are tacit knowledge in the field, trapped in people\u2019s heads, code bases, or the back pages of conference papers, and that beginners to the field should be aware of them.<\/p>\n<blockquote>\n<p>It is our belief that researchers and practitioners acquire, through experience and word-of-mouth, techniques and heuristics that help them successfully apply neural networks to difficult real-world problems. [\u2026] they are usually hidden in people\u2019s heads or in the back pages of space-constrained conference papers.<\/p>\n<\/blockquote>\n<p>The book is an effort to group these tricks together, following the success of a workshop of the same name at the 1996 NIPS conference.<\/p>\n<blockquote>\n<p>This book is an outgrowth of a 1996 NIPS workshop called Tricks of the Trade whose goal was to begin the process of gathering and documenting these tricks. 
The interest that the workshop generated motivated us to expand our collection and compile it into this book.<\/p>\n<\/blockquote>\n<p>\u2014 Page 1, <a href=\"https:\/\/amzn.to\/2zwUmNM\">Neural Networks: Tricks of the Trade<\/a>, Second Edition, 2012.<\/p>\n<h2>Breakdown of First Edition<\/h2>\n<p>The <a href=\"https:\/\/amzn.to\/2SRyBBa\">first edition<\/a> of the book was edited by Genevieve Orr and Klaus-Robert Muller, comprised five parts and 17 chapters, and was published 20 years ago, in 1998.<\/p>\n<p>Each part includes a useful preface that summarizes what to expect in the upcoming chapters, and each chapter is written by one or more academics in the field.<\/p>\n<p>The breakdown of this first edition was as follows:<\/p>\n<h3>Part 1: Speeding Learning<\/h3>\n<ul>\n<li>Chapter 1: Efficient BackProp<\/li>\n<\/ul>\n<h3>Part 2: Regularization Techniques to Improve Generalization<\/h3>\n<ul>\n<li>Chapter 2: Early Stopping \u2013 But When?<\/li>\n<li>Chapter 3: A Simple Trick for Estimating the Weight Decay Parameter<\/li>\n<li>Chapter 4: Controlling the Hyperparameter Search on MacKay\u2019s Bayesian Neural Network Framework<\/li>\n<li>Chapter 5: Adaptive Regularization in Neural Network Modeling<\/li>\n<li>Chapter 6: Large Ensemble Averaging<\/li>\n<\/ul>\n<h3>Part 3: Improving Network Models and Algorithmic Tricks<\/h3>\n<ul>\n<li>Chapter 7: Square Unit Augmented, Radically Extended, Multilayer Perceptrons<\/li>\n<li>Chapter 8: A Dozen Tricks with Multitask Learning<\/li>\n<li>Chapter 9: Solving the Ill-Conditioning in Neural Network Learning<\/li>\n<li>Chapter 10: Centering Neural Network Gradient Factors<\/li>\n<li>Chapter 11: Avoiding Roundoff Error in Backpropagating Derivatives<\/li>\n<\/ul>\n<h3>Part 4: Representation and Incorporating Prior Knowledge in Neural Network Training<\/h3>\n<ul>\n<li>Chapter 12: Transformation Invariance in Pattern Recognition \u2013 Tangent Distance and Tangent Propagation<\/li>\n<li>Chapter 13: Combining 
Neural Networks and Context-Driven Search for On-Line Printed Handwriting Recognition in the Newton<\/li>\n<li>Chapter 14: Neural Network Classification and Prior Class Probabilities<\/li>\n<li>Chapter 15: Applying Divide and Conquer to Large Scale Pattern Recognition Tasks<\/li>\n<\/ul>\n<h3>Part 5: Tricks for Time Series<\/h3>\n<ul>\n<li>Chapter 16: Forecasting the Economy with Neural Nets: A Survey of Challenges and Solutions<\/li>\n<li>Chapter 17: How to Train Neural Networks<\/li>\n<\/ul>\n<p>It is an expensive book, and if you can pick up a cheap second-hand copy of this first edition, then I highly recommend it.<\/p>\n<h2>Additions in the Second Edition<\/h2>\n<p>The <a href=\"https:\/\/amzn.to\/2zwUmNM\">second edition<\/a> of the book was released in 2012, seemingly right at the 
beginning of the large push that became \u201cdeep learning.\u201d As such, the book captures the new techniques of the time, such as layer-wise pretraining and restricted Boltzmann machines.<\/p>\n<p>It was still too early to cover the ReLU activation, CNNs trained on ImageNet, or the use of large LSTMs.<\/p>\n<p>Nevertheless, the second edition included three new parts and 13 new chapters.<\/p>\n<p>The breakdown of the additions in the second edition is as follows:<\/p>\n<h3>Part 6: Big Learning in Deep Neural Networks<\/h3>\n<ul>\n<li>Chapter 18: Stochastic Gradient Descent Tricks<\/li>\n<li>Chapter 19: Practical Recommendations for Gradient-Based Training of Deep Architectures<\/li>\n<li>Chapter 20: Training Deep and Recurrent Networks with Hessian-Free Optimization<\/li>\n<li>Chapter 21: Implementing Neural Networks Efficiently<\/li>\n<\/ul>\n<h3>Part 7: Better Representations: Invariant, Disentangled and Reusable<\/h3>\n<ul>\n<li>Chapter 22: Learning Feature Representations with K-Means<\/li>\n<li>Chapter 23: Deep Big Multilayer Perceptrons for Digit Recognition<\/li>\n<li>Chapter 24: A Practical Guide to Training Restricted Boltzmann Machines<\/li>\n<li>Chapter 25: Deep Boltzmann Machines and the Centering Trick<\/li>\n<li>Chapter 26: Deep Learning via Semi-supervised Embedding<\/li>\n<\/ul>\n<h3>Part 8: Identifying Dynamical Systems for Forecasting and Control<\/h3>\n<ul>\n<li>Chapter 27: A Practical Guide to Applying Echo State Networks<\/li>\n<li>Chapter 28: Forecasting with Recurrent Neural Networks: 12 Tricks<\/li>\n<li>Chapter 29: Solving Partially Observable Reinforcement Learning Problems with Recurrent Neural Networks<\/li>\n<li>Chapter 30: 10 Steps and Some Tricks to Set up Neural Reinforcement Controllers<\/li>\n<\/ul>\n<h2>Must-Read Chapters<\/h2>\n<p>The whole book is a good read, although I don\u2019t recommend reading all of it if you are looking for quick and useful tips that you can use immediately.<\/p>\n<p>This is because many of the chapters focus on the 
writers\u2019 pet projects, or on highly specialized methods. Instead, I recommend reading four specific chapters, two from the first edition and two from the second.<\/p>\n<p>The <a href=\"https:\/\/amzn.to\/2zwUmNM\">second edition of the book<\/a> is worth purchasing for these four chapters alone, and I highly recommend picking up a copy for yourself, your team, or your office.<\/p>\n<p>Fortunately, there are pre-print PDFs of these chapters available for free online.<\/p>\n<p>The recommended chapters are:<\/p>\n<ul>\n<li><strong>Chapter 1<\/strong>: <a href=\"http:\/\/yann.lecun.com\/exdb\/publis\/pdf\/lecun-98b.pdf\">Efficient BackProp<\/a>, by Yann LeCun, et al.<\/li>\n<li><strong>Chapter 2<\/strong>: <a href=\"https:\/\/page.mi.fu-berlin.de\/prechelt\/Biblio\/stop_tricks1997.pdf\">Early Stopping \u2013 But When?<\/a>, by Lutz Prechelt.<\/li>\n<li><strong>Chapter 18<\/strong>: <a href=\"https:\/\/cilvr.cs.nyu.edu\/diglib\/lsml\/bottou-sgd-tricks-2012.pdf\">Stochastic Gradient Descent Tricks<\/a>, by Leon Bottou.<\/li>\n<li><strong>Chapter 19<\/strong>: <a href=\"https:\/\/arxiv.org\/abs\/1206.5533\">Practical Recommendations for Gradient-Based Training of Deep Architectures<\/a>, by Yoshua Bengio.<\/li>\n<\/ul>\n<p>Let\u2019s take a closer look at each of these chapters in turn.<\/p>\n<h3>Efficient BackProp<\/h3>\n<p>This chapter focuses on providing very specific tips to get the most out of the stochastic gradient descent optimization algorithm and the backpropagation weight update algorithm.<\/p>\n<blockquote>\n<p>Many undesirable behaviors of backprop can be avoided with tricks that are rarely exposed in serious technical publications. 
This paper gives some of those tricks, and offers explanations of why they work.<\/p>\n<\/blockquote>\n<p>\u2014 Page 9, <a href=\"https:\/\/amzn.to\/2SRyBBa\">Neural Networks: Tricks of the Trade<\/a>, First Edition, 1998.<\/p>\n<p>The chapter proceeds to provide a dense and theoretically supported list of tips for configuring the algorithm, preparing input data, and more.<\/p>\n<p>The chapter is so dense that it is hard to summarize, although a good list of recommendations is provided in the \u201c<em>Discussion and Conclusion<\/em>\u201d section at the end, quoted from the book below:<\/p>\n<blockquote>\n<p>\u2013 shuffle the examples<br \/>\n\u2013 center the input variables by subtracting the mean<br \/>\n\u2013 normalize the input variable to a standard deviation of 1<br \/>\n\u2013 if possible, decorrelate the input variables.<br \/>\n\u2013 pick a network with the sigmoid function shown in figure 1.4<br \/>\n\u2013 set the target values within the range of the sigmoid, typically +1 and -1.<br \/>\n\u2013 initialize the weights to random values as prescribed by 1.16.<\/p>\n<p>The preferred method for training the network should be picked as follows:<br \/>\n\u2013 if the training set is large (more than a few hundred samples) and redundant, and if the task is classification, use stochastic gradient with careful tuning, or use the stochastic diagonal Levenberg Marquardt method.<br \/>\n\u2013 if the training set is not too large, or if the task is regression, use conjugate gradient.<\/p>\n<\/blockquote>\n<p>\u2014 Pages 47-48, <a href=\"https:\/\/amzn.to\/2SRyBBa\">Neural Networks: Tricks of the Trade<\/a>, First Edition, 1998.<\/p>\n<p>The field of applied neural networks has come a long way in the twenty years since this was published (e.g. 
the comments on sigmoid activation functions are no longer relevant), yet the basics have not changed.<\/p>\n<p>This chapter is required reading for all deep learning practitioners.<\/p>\n<h3>Early Stopping \u2013 But When?<\/h3>\n<p>This chapter describes the simple yet powerful regularization method called early stopping that will halt the training of a neural network when the performance of the model begins to degrade on a hold-out validation dataset.<\/p>\n<blockquote>\n<p>Validation can be used to detect when overfitting starts during supervised training of a neural network; training is then stopped before convergence to avoid the overfitting (\u201cearly stopping\u201d)<\/p>\n<\/blockquote>\n<p>\u2014 Page 55, <a href=\"https:\/\/amzn.to\/2SRyBBa\">Neural Networks: Tricks of the Trade<\/a>, First Edition, 1998.<\/p>\n<p>The challenge of early stopping is the choice and configuration of the trigger used to stop the training process, and the systematic configuration of early stopping is the focus of the chapter.<\/p>\n<p>The general early stopping criteria are described as:<\/p>\n<ul>\n<li><strong>GL<\/strong>: stop as soon as the generalization loss exceeds a specified threshold.<\/li>\n<li><strong>PQ<\/strong>: stop as soon as the quotient of generalization loss and progress exceeds a threshold.<\/li>\n<li><strong>UP<\/strong>: stop when the generalization error increases in successive strips.<\/li>\n<\/ul>\n<p>Three recommendations, i.e. \u201c<em>the trick<\/em>,\u201d are provided:<\/p>\n<blockquote>\n<p>1. Use fast stopping criteria unless small improvements of network performance (e.g. 4%) are worth large increases of training time (e.g. factor 4).<br \/>\n2. To maximize the probability of finding a \u201cgood\u201d solution (as opposed to maximizing the average quality of solutions), use a GL criterion.<br \/>\n3. 
To maximize the average quality of solutions, use a PQ criterion if the network overfits only very little or an UP criterion otherwise.<\/p>\n<\/blockquote>\n<p>\u2014 Page 60, <a href=\"https:\/\/amzn.to\/2SRyBBa\">Neural Networks: Tricks of the Trade<\/a>, First Edition, 1998.<\/p>\n<p>The rules are analyzed empirically over a large number of training runs and test problems. The crux of the finding is that being more patient with the early stopping criteria results in better hold-out performance at the cost of additional computational complexity.<\/p>\n<blockquote>\n<p>I conclude slower stopping criteria allow for small improvements in generalization (here: about 4% on average), but cost much more training time (here: about factor 4 longer on average).<\/p>\n<\/blockquote>\n<p>\u2014 Page 55, <a href=\"https:\/\/amzn.to\/2SRyBBa\">Neural Networks: Tricks of the Trade<\/a>, First Edition, 1998.<\/p>\n<h3>Stochastic Gradient Descent Tricks<\/h3>\n<p>This chapter focuses on a detailed review of the stochastic gradient descent optimization algorithm and tips to help get the most out of it.<\/p>\n<blockquote>\n<p>This chapter provides background material, explains why SGD is a good learning algorithm when the training set is large, and provides useful recommendations.<\/p>\n<\/blockquote>\n<p>\u2014 Page 421, <a href=\"https:\/\/amzn.to\/2zwUmNM\">Neural Networks: Tricks of the Trade<\/a>, Second Edition, 2012.<\/p>\n<p>There is a lot of overlap with <em>Chapter 1: Efficient BackProp<\/em>, and although the chapter calls out tips along the way with boxes, a useful list of tips is not summarized at the end of the chapter.<\/p>\n<p>Nevertheless, it is a compulsory read for all neural network practitioners.<\/p>\n<p>Below is my own summary of the tips called out in boxes throughout the chapter, mostly quoting directly from the second edition:<\/p>\n<ul>\n<li>Use stochastic gradient descent (batch=1) when training time is the bottleneck.<\/li>\n<li>Randomly shuffle the 
training examples.<\/li>\n<li>Use preconditioning techniques.<\/li>\n<li>Monitor both the training cost and the validation error.<\/li>\n<li>Check the gradients using finite differences.<\/li>\n<li>Experiment with the learning rates [with] a small sample of the training set.<\/li>\n<li>Leverage the sparsity of the training examples.<\/li>\n<li>Use a decaying learning rate.<\/li>\n<li>Try averaged stochastic gradient (i.e. a specific variant of the algorithm).<\/li>\n<\/ul>\n<p>Some of these tips are pithy without context; I recommend reading the chapter.<\/p>\n<h3>Practical Recommendations for Gradient-Based Training of Deep Architectures<\/h3>\n<p>This chapter focuses on the effective training of neural networks and early deep learning models.<\/p>\n<p>It ties together the classical advice from Chapters 1 and 29 but adds comments on then-recent deep learning developments like greedy layer-wise pretraining, modern hardware like GPUs, modern efficient code libraries like BLAS, and advice from real projects tuning the training of models, such as the order in which to tune hyperparameters.<\/p>\n<blockquote>\n<p>This chapter is meant as a practical guide with recommendations for some of the most commonly used hyper-parameters, in particular in the context of learning algorithms based on backpropagated gradient and gradient-based optimization.<\/p>\n<\/blockquote>\n<p>\u2014 Page 437, <a href=\"https:\/\/amzn.to\/2zwUmNM\">Neural Networks: Tricks of the Trade<\/a>, Second Edition, 2012.<\/p>\n<p>It\u2019s also long, divided into six main sections:<\/p>\n<ul>\n<li><strong>Deep Learning Innovations<\/strong>. Including greedy layer-wise pretraining, denoising autoencoders, and online learning.<\/li>\n<li><strong>Gradients<\/strong>. Including mini-batch gradient descent and automatic differentiation.<\/li>\n<li><strong>Hyperparameters<\/strong>. 
Including learning rate, mini-batch size, epochs, momentum, nodes, weight regularization, activity regularization, hyperparameter search, and recommendations.<\/li>\n<li><strong>Debugging and Analysis<\/strong>. Including monitoring loss for overfitting, visualization, and statistics.<\/li>\n<li><strong>Other Recommendations<\/strong>. Including GPU hardware and use of efficient linear algebra libraries such as BLAS.<\/li>\n<li><strong>Open Questions<\/strong>. Including the difficulty of training deep models and adaptive learning rates.<\/li>\n<\/ul>\n<p>There\u2019s far too much for me to summarize; the chapter is dense with useful advice for configuring and tuning neural network models.<\/p>\n<p>Without a doubt, this is required reading and provided the seeds for the recommendations later described in the 2016 book <a href=\"https:\/\/amzn.to\/2F6AUgz\">Deep Learning<\/a>, of which Yoshua Bengio was one of three authors.<\/p>\n<p>The chapter finishes on a strong, optimistic note.<\/p>\n<blockquote>\n<p>The practice summarized here, coupled with the increase in available computing power, now allows researchers to train neural networks on a scale that is far beyond what was possible at the time of the first edition of this book, helping to move us closer to artificial intelligence.<\/p>\n<\/blockquote>\n<p>\u2014 Page 473, <a href=\"https:\/\/amzn.to\/2zwUmNM\">Neural Networks: Tricks of the Trade<\/a>, Second Edition, 2012.<\/p>\n<h2>Further Reading<\/h2>\n<h3>Get the Book on Amazon<\/h3>\n<ul>\n<li><a href=\"https:\/\/amzn.to\/2SRyBBa\">Neural Networks: Tricks of the Trade<\/a>, First Edition, 1998.<\/li>\n<li><a href=\"https:\/\/amzn.to\/2zwUmNM\">Neural Networks: Tricks of the Trade<\/a>, Second Edition, 2012.<\/li>\n<\/ul>\n<h3>Other Book Pages<\/h3>\n<ul>\n<li><a href=\"https:\/\/link.springer.com\/book\/10.1007%2F978-3-642-35289-8\">Neural Networks: Tricks of the Trade<\/a>, Second Edition, 2012. 
Springer Homepage.<\/li>\n<li><a href=\"https:\/\/books.google.com.au\/books?id=M5O6BQAAQBAJ\">Neural Networks: Tricks of the Trade<\/a>, Second Edition, 2012. Google Books<\/li>\n<\/ul>\n<h3>Pre-Prints of Recommended Chapters<\/h3>\n<ul>\n<li><a href=\"http:\/\/yann.lecun.com\/exdb\/publis\/pdf\/lecun-98b.pdf\">Efficient BackProp<\/a>, 1998.<\/li>\n<li><a href=\"https:\/\/page.mi.fu-berlin.de\/prechelt\/Biblio\/stop_tricks1997.pdf\">Early Stopping \u2013 But When?<\/a>, 1998.<\/li>\n<li><a href=\"https:\/\/cilvr.cs.nyu.edu\/diglib\/lsml\/bottou-sgd-tricks-2012.pdf\">Stochastic Gradient Descent Tricks<\/a>, 2012.<\/li>\n<li><a href=\"https:\/\/arxiv.org\/abs\/1206.5533\">Practical Recommendations for Gradient-Based Training of Deep Architectures<\/a>, 2012.<\/li>\n<\/ul>\n<h2>Summary<\/h2>\n<p>In this post, you discovered the book \u201c<em>Neural Networks: Tricks of the Trade<\/em>\u201d that provides advice from neural network academics and practitioners on how to get the most out of your models.<\/p>\n<p>Have you read some or all of this book? What do you think of it?<br \/>\nLet me know in the comments below.<\/p>\n<p>The post <a rel=\"nofollow\" href=\"https:\/\/machinelearningmastery.com\/neural-networks-tricks-of-the-trade-review\/\">Neural Networks: Tricks of the Trade Review<\/a> appeared first on <a rel=\"nofollow\" href=\"https:\/\/machinelearningmastery.com\/\">Machine Learning Mastery<\/a>.<\/p>\n<\/div>\n<p><a href=\"https:\/\/machinelearningmastery.com\/neural-networks-tricks-of-the-trade-review\/\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Jason Brownlee Deep learning neural networks are challenging to configure and train. 
There are decades of tips and tricks spread across hundreds of research [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2019\/02\/19\/neural-networks-tricks-of-the-trade-review\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":1746,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/1745"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=1745"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/1745\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/1746"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=1745"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=1745"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=1745"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}