{"id":3441,"date":"2020-05-12T06:30:28","date_gmt":"2020-05-12T06:30:28","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2020\/05\/12\/differential-ml-on-tensorflow-and-colab\/"},"modified":"2020-05-12T06:30:28","modified_gmt":"2020-05-12T06:30:28","slug":"differential-ml-on-tensorflow-and-colab","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2020\/05\/12\/differential-ml-on-tensorflow-and-colab\/","title":{"rendered":"Differential ML on TensorFlow and Colab"},"content":{"rendered":"<p>Author: Antoine Savine<\/p>\n<div>\n<p id=\"2cd7\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\"><a href=\"https:\/\/www.risk.net\/awards\/2133160\/quants-year-jesper-andreasen-and-brian-huge-danske-bank\" class=\"cg hm kf kg kh ki\" target=\"_blank\" rel=\"noopener nofollow noreferrer\">Brian Huge<\/a><span>&nbsp;<\/span>and I just posted a<span>&nbsp;<\/span><a href=\"https:\/\/arxiv.org\/abs\/2005.02347\" class=\"cg hm kf kg kh ki\" target=\"_blank\" rel=\"noopener nofollow noreferrer\">working paper<\/a><span>&nbsp;<\/span>following six months of research and development on function approximation by artificial intelligence (AI) at Danske Bank. One major finding was that training machine learning (ML) models for regression (i.e. prediction of values, not classes) may be massively improved when<span>&nbsp;<\/span><em class=\"kj\">the gradients of training labels wrt training inputs<span>&nbsp;<\/span><\/em>are available. Given those<span>&nbsp;<\/span><em class=\"kj\">differential labels<\/em>, we can write simple yet unreasonably effective training algorithms, capable of learning accurate function approximations from small datasets with remarkable speed, in a stable manner, without the need for additional regularization or hyperparameter optimization, e.g. 
by cross validation.<\/p>\n<blockquote class=\"kk kl km\">\n<p id=\"a5ae\" class=\"jq kc ff kj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\">In this post, we briefly summarize these algorithms under the name<span>&nbsp;<\/span><em class=\"bj\">differential machine learning<\/em>, highlighting the main intuitions and benefits and commenting on the TensorFlow implementation code. All the details are found in the working paper,<span>&nbsp;<\/span><a href=\"https:\/\/github.com\/differential-machine-learning\/appendices\" class=\"cg hm kf kg kh ki\" target=\"_blank\" rel=\"noopener nofollow noreferrer\">the online appendices<\/a><span>&nbsp;<\/span>and<span>&nbsp;<\/span><a href=\"https:\/\/github.com\/differential-machine-learning\/notebooks\" class=\"cg hm kf kg kh ki\" target=\"_blank\" rel=\"noopener nofollow noreferrer\">the Colab notebooks<\/a>.<\/p>\n<\/blockquote>\n<p id=\"3b0d\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\">In the context of financial Derivatives pricing approximation, training sets are<span>&nbsp;<\/span><em class=\"kj\">simulated<span>&nbsp;<\/span><\/em>with Monte-Carlo models. Each training example is simulated on one Monte-Carlo path, where the label is the final<span>&nbsp;<\/span><em class=\"kj\">payoff<\/em><span>&nbsp;<\/span>of a transaction and the input is the initial state vector of the market. Differential labels are the<span>&nbsp;<\/span><em class=\"kj\">pathwise gradients<span>&nbsp;<\/span><\/em>of the payoff wrt the state, efficiently computed with<span>&nbsp;<\/span><a class=\"cg hm kf kg kh ki\" target=\"_blank\" rel=\"noopener noreferrer\" href=\"https:\/\/towardsdatascience.com\/automatic-differentiation-15min-video-tutorial-with-application-in-machine-learning-and-finance-333e18c0ecbb\">Automatic Adjoint Differentiation (AAD)<\/a>. 
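To make the idea of differential labels concrete, here is a minimal, dependency-free sketch (not the paper's code) that simulates one training example for a European call in a one-step Black-Scholes model. For this simple payoff the pathwise derivative is written by hand; the paper computes it with AAD for arbitrary payoffs. The function name and signature are choices of this sketch:

```python
import math

def pathwise_example(s0, strike, vol, rate, maturity, z):
    """Simulate one training example for a European call in a one-step
    Black-Scholes model, given a standard Gaussian draw z.
    Returns (input, payoff label, differential label dPayoff/dS0)."""
    # terminal spot on this Monte-Carlo path
    st = s0 * math.exp((rate - 0.5 * vol ** 2) * maturity
                       + vol * math.sqrt(maturity) * z)
    payoff = max(st - strike, 0.0)
    # pathwise delta: dSt/dS0 = St/S0, chained through max(., 0);
    # hand-written here, computed with AAD for general payoffs
    dpayoff_ds0 = st / s0 if st > strike else 0.0
    return s0, payoff, dpayoff_ds0
```

The returned triple (input, payoff, pathwise derivative) is exactly one row of the augmented dataset discussed below.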
For this reason, differential machine learning<em class=\"kj\"><span>&nbsp;<\/span><\/em>is particularly effective in finance, although it is also applicable in all other situations where high-quality first-order derivatives wrt training inputs are available.<\/p>\n<p id=\"fbea\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\">Models are trained on augmented datasets containing not only inputs and labels but also differentials:<\/p>\n<p class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\"><a href=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/4917079689?profile=original\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" src=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/4917079689?profile=RESIZE_710x\" class=\"align-center\"><\/a><\/p>\n<div class=\"ko kp cd kq ai\">\n<div class=\"ep eq kn\">\n<div class=\"ik r cd il\">\n<div class=\"kr in r\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<p id=\"b25c\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\">by minimization of the combined cost of prediction errors on values<span>&nbsp;<\/span><em class=\"kj\">and derivatives<\/em>:<\/p>\n<p class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\"><a href=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/4917093098?profile=original\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" src=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/4917093098?profile=RESIZE_710x\" class=\"align-center\"><\/a><\/p>\n<p>The value and derivative&nbsp;<em class=\"kj\">labels&nbsp;<\/em>are given. We compute&nbsp;<em class=\"kj\">predicted&nbsp;<\/em>values by inference, as customary, and&nbsp;<em class=\"kj\">predicted&nbsp;<\/em>derivatives by backpropagation. 
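The combined cost can be sketched in a few lines of dependency-free Python. The weighting `lambda_` between the two mean squared errors is an assumption of this sketch; the paper and notebook discuss how to balance the terms:

```python
def combined_cost(y_pred, y_true, dydx_pred, dydx_true, lambda_=1.0):
    """Combined cost: mean squared error on predicted values plus a
    weighted mean squared error on predicted derivatives.  Inputs are
    lists of values and lists of gradient rows (one row per example)."""
    n = len(y_true)
    value_mse = sum((yp - yt) ** 2 for yp, yt in zip(y_pred, y_true)) / n
    m = sum(len(row) for row in dydx_true)   # total number of derivatives
    deriv_mse = sum((dp - dt) ** 2
                    for prow, trow in zip(dydx_pred, dydx_true)
                    for dp, dt in zip(prow, trow)) / m
    return value_mse + lambda_ * deriv_mse
```

Setting `lambda_=0` recovers the conventional value-only objective.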
Although the methodology is applicable to architectures of arbitrary complexity, we discuss it here in the context of vanilla feedforward networks in the interest of simplicity.<\/p>\n<p id=\"246a\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\">Recall the vanilla feedforward equations:<\/p>\n<p class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\"><a href=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/4917106695?profile=original\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" src=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/4917106695?profile=RESIZE_710x\" class=\"align-center\"><\/a><\/p>\n<div class=\"ep eq ku\">\n<div class=\"ik r cd il\">\n<div class=\"kv in r\"><\/div>\n<\/div>\n<\/div>\n<p>where the notations are standard and specified in the paper (index 3 is for consistency with the paper).<\/p>\n<p id=\"7f61\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\">All the code in this post is extracted from the demonstration notebook, which also includes comments and practical implementation details.<\/p>\n<p><a href=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/4917142685?profile=original\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" src=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/4917142685?profile=RESIZE_710x\" class=\"align-full\"><\/a><\/p>\n<p>Below is a TensorFlow (1.x) implementation of the feedforward equations. We chose to write the matrix operations explicitly, in place of high-level Keras layers, to highlight the equations in code. We chose the softplus activation; ELU is another alternative. 
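For readers without TensorFlow at hand, the same feedforward equations can be sketched in dependency-free Python (softplus on hidden layers, linear output). The function names and nested-list weight layout are choices of this sketch, not the notebook's:

```python
import math

def softplus(x):
    return math.log(1.0 + math.exp(x))

def feedforward(x, weights, biases):
    """Vanilla feedforward pass: softplus on every hidden layer, linear
    output layer.  weights[l] is a list of rows (one row per unit of the
    next layer); biases[l] is the matching bias vector."""
    a = list(x)
    layers = list(zip(weights, biases))
    for w, b in layers[:-1]:                      # hidden layers
        a = [softplus(sum(wi * ai for wi, ai in zip(row, a)) + bi)
             for row, bi in zip(w, b)]
    w, b = layers[-1]                             # linear output layer
    return [sum(wi * ai for wi, ai in zip(row, a)) + bi
            for row, bi in zip(w, b)]
```

In the notebook the same computation is expressed as TensorFlow matrix operations, so that TensorFlow can differentiate it.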
For reasons explained in the paper, the activation must be continuously differentiable, ruling out e.g. ReLU and SELU.<\/p>\n<p>Source: <a href=\"https:\/\/gist.githubusercontent.com\/differential-machine-learning\/9ac877e1b4ade9c0cbc96b142cec6404\/raw\/f8c0b1fd2a93f786922af6f41fa56e7945c2d4d6\/vanilla_net.py\" target=\"_blank\" rel=\"noopener noreferrer\">here<\/a><\/p>\n<div class=\"ik r cd\"><\/div>\n<p id=\"7a84\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\">Derivatives of the output wrt inputs are predicted with backpropagation. Recall that the backpropagation equations are derived as<span>&nbsp;<\/span><em class=\"kj\">adjoints<span>&nbsp;<\/span><\/em>of the feedforward equations, or see<span>&nbsp;<\/span><a class=\"cg hm kf kg kh ki\" target=\"_blank\" rel=\"noopener noreferrer\" href=\"https:\/\/towardsdatascience.com\/automatic-differentiation-15min-video-tutorial-with-application-in-machine-learning-and-finance-333e18c0ecbb\">our tutorial<\/a><span>&nbsp;<\/span>for a refresher:<\/p>\n<p class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\"><a href=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/4917210099?profile=original\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" src=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/4917210099?profile=RESIZE_710x\" class=\"align-center\"><\/a><\/p>\n<div class=\"ep eq ld\">\n<div class=\"ik r cd il\">\n<div class=\"le in r\"><\/div>\n<\/div>\n<\/div>\n<p id=\"f460\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\">Or in code, recalling that the derivative of softplus is sigmoid:<\/p>\n<p class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\"><a href=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/4917233877?profile=original\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" 
src=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/4917233877?profile=RESIZE_710x\" class=\"align-full\"><\/a><\/p>\n<div class=\"ik r cd\"><\/div>\n<p id=\"8883\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\">Once again, we wrote the backpropagation equations explicitly in place of a call to<span>&nbsp;<\/span><em class=\"kj\">tf.gradients()<\/em>. We chose to do it this way first to highlight the equations in code again, and also to avoid nesting layers of backpropagation during training, as seen next. For the avoidance of doubt, replacing this code with one call to<span>&nbsp;<\/span><em class=\"kj\">tf.gradients()<span>&nbsp;<\/span><\/em>works too.<\/p>\n<p id=\"bf2e\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\">Next, we combine feedforward and backpropagation in<span>&nbsp;<\/span><em class=\"kj\">one<span>&nbsp;<\/span><\/em>network, which we call the<span>&nbsp;<\/span><em class=\"kj\">twin network<\/em>, a neural network of twice the depth, capable of simultaneously predicting values<span>&nbsp;<\/span><em class=\"kj\">and derivatives<span>&nbsp;<\/span><\/em>at twice the computation cost:<\/p>\n<p class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\"><a href=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/4917285675?profile=original\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" src=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/4917285675?profile=RESIZE_710x\" class=\"align-center\"><\/a><\/p>\n<p class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\"><a href=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/4917268457?profile=original\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" 
src=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/4917268457?profile=RESIZE_710x\" class=\"align-full\"><\/a><\/p>\n<p id=\"a6e9\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\">The twin network is beneficial in two ways.<span>&nbsp;<\/span><em class=\"kj\">After training<\/em>, it efficiently predicts values and derivatives given inputs, which is valuable in applications where derivative predictions are desirable. In finance, for example, they are sensitivities of prices to market state variables, also called<span>&nbsp;<\/span><em class=\"kj\">Greeks<span>&nbsp;<\/span><\/em>(because traders give them Greek letters), which also correspond to<span>&nbsp;<\/span><em class=\"kj\">hedge ratios<\/em>.<\/p>\n<p id=\"dadf\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\">The twin network is also a fundamental construct for<span>&nbsp;<\/span><em class=\"kj\">differential training<\/em>. The combined cost function is computed by inference through the twin network, predicting values and derivatives. The<span>&nbsp;<\/span><em class=\"kj\">gradients<span>&nbsp;<\/span><\/em>of the cost function are computed by backpropagation through the twin network, including its backpropagation part, silently conducted by TensorFlow as part of its optimization loop. 
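As a minimal illustration of the twin construct, here is a dependency-free sketch for a single hidden layer: a forward pass predicting the value, followed by an explicit backward pass predicting the derivatives wrt inputs via the adjoint equations (the derivative of softplus is sigmoid). Names and the nested-list layout are choices of this sketch:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softplus(x):
    return math.log(1.0 + math.exp(x))

def twin_net(x, w1, b1, w2, b2):
    """Twin network for one hidden softplus layer: the forward pass
    predicts the value y, the explicit backward pass predicts dy/dx
    as the adjoints of the forward equations."""
    # forward pass
    z1 = [sum(wij * xj for wij, xj in zip(row, x)) + bi
          for row, bi in zip(w1, b1)]
    a1 = [softplus(z) for z in z1]
    y = sum(wi * ai for wi, ai in zip(w2, a1)) + b2
    # backward pass: adjoints, using softplus' = sigmoid
    dz1 = [wi * sigmoid(z) for wi, z in zip(w2, z1)]   # dy/dz1
    dx = [sum(dz1[i] * w1[i][j] for i in range(len(w1)))
          for j in range(len(x))]                      # dy/dx
    return y, dx
```

The backward half is just more matrix operations, which is why TensorFlow can differentiate through the whole twin network during training.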
Recall the standard training loop for neural networks:<\/p>\n<p>Source: see <a href=\"https:\/\/gist.githubusercontent.com\/differential-machine-learning\/f72092f15b7fa585dda275e398e69ea4\/raw\/e6ed8386e9195356b820a9221d13f7c968461a9b\/vanilla_loop.py\" target=\"_blank\" rel=\"noopener noreferrer\">here<\/a><\/p>\n<div class=\"ik r cd\">\n<div class=\"uz in r\">The differential training loop is virtually identical, save for the definition of the cost function, which now combines mean squared errors on values and derivatives:<\/div>\n<div class=\"uz in r\"><\/div>\n<div class=\"uz in r\">Source: see&nbsp;<a href=\"https:\/\/gist.githubusercontent.com\/differential-machine-learning\/c829b18d6421fad42120db3f09f40543\/raw\/9cc6aaa88230b51e089d06e9870cfecdb55ed1d1\/differential_loop.py\" target=\"_blank\" rel=\"noopener noreferrer\">here<\/a><\/div>\n<\/div>\n<div class=\"ik r cd\"><\/div>\n<p id=\"57e3\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\">TensorFlow differentiates the twin network seamlessly behind the scenes for the needs of optimization. It doesn&rsquo;t matter that part of the network is itself a backpropagation. This is just another sequence of matrix operations, which TensorFlow differentiates without difficulty.<\/p>\n<p id=\"90f8\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\">The rest of the notebook deals with standard data preparation, training, and testing, and the application to a couple of textbook datasets in finance: European calls in Black &amp; Scholes, and basket options in correlated Bachelier. 
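To see why derivative labels are so informative, consider a differential training step for the simplest possible model, a 1D linear regression y = w*x + b, where the predicted derivative wrt x is simply w: the derivative term of the combined cost pulls w directly toward the true slope. A minimal sketch assuming plain gradient descent, not the notebook's optimizer:

```python
def differential_step(w, b, xs, ys, dydxs, lr=0.1, lam=1.0):
    """One gradient-descent step on the combined cost for y = w*x + b.
    The model's derivative prediction wrt x is simply w, so the
    derivative labels enter the gradient as a direct pull of w
    toward the true slope."""
    n = len(xs)
    dw = db = 0.0
    for x, y, dydx in zip(xs, ys, dydxs):
        err = (w * x + b) - y
        dw += 2.0 * err * x / n            # value-error term
        db += 2.0 * err / n
        dw += lam * 2.0 * (w - dydx) / n   # derivative-error term
    return w - lr * dw, b - lr * db
```

For a twin network the same structure holds, with the derivative predictions produced by the backward pass instead of being the weight itself.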
The results demonstrate the<span>&nbsp;<\/span><em class=\"kj\">unreasonable<span>&nbsp;<\/span><\/em>effectiveness of differential deep learning.<\/p>\n<p class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\"><a href=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/4917338080?profile=original\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" src=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/4917338080?profile=RESIZE_710x\" class=\"align-center\"><\/a><\/p>\n<p id=\"8e46\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\"><a href=\"https:\/\/github.com\/differential-machine-learning\/appendices\" class=\"cg hm kf kg kh ki\" target=\"_blank\" rel=\"noopener nofollow noreferrer\">In the online appendices<\/a>, we explored applications of differential machine learning to other kinds of ML models, such as basis function regression and principal component analysis (PCA), with equally remarkable results.<\/p>\n<p id=\"592b\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\">Differential training imposes a penalty on incorrect derivatives in the same way that conventional regularization like ridge\/Tikhonov favors small weights. Contrary to conventional regularization, differential ML effectively mitigates overfitting<span>&nbsp;<\/span><em class=\"kj\">without introducing a bias<\/em>. Hence, there is no bias-variance tradeoff, nor any need to tweak hyperparameters by cross validation. It just works.<\/p>\n<p id=\"e6f4\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\">Differential machine learning is more akin to<span>&nbsp;<\/span><em class=\"kj\">data augmentation<\/em>, which in turn may be seen as a better form of regularization. Data augmentation is routinely applied, e.g. in computer vision, with documented success. The idea is to produce multiple labeled images from a single one, e.g. 
by cropping, zooming, rotating or recoloring. In addition to extending the training set at negligible cost, data augmentation teaches the ML model important invariances. Similarly, derivative labels not only increase the amount of information in the training set at very small cost (as long as they are computed with AAD), but also teach ML models the<span>&nbsp;<\/span><em class=\"kj\">shape<\/em><span>&nbsp;<\/span>of pricing functions.<\/p>\n<p id=\"46b8\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\"><strong class=\"js lj\">Working paper<\/strong>:<span>&nbsp;<\/span><a href=\"https:\/\/arxiv.org\/abs\/2005.02347\" class=\"cg hm kf kg kh ki\" target=\"_blank\" rel=\"noopener nofollow noreferrer\">https:\/\/arxiv.org\/abs\/2005.02347<\/a><br \/><strong class=\"js lj\">Github repo<\/strong>:<span>&nbsp;<\/span><a href=\"https:\/\/github.com\/differential-machine-learning\" class=\"cg hm kf kg kh ki\" target=\"_blank\" rel=\"noopener nofollow noreferrer\">github.com\/differential-machine-learning<\/a><br \/><strong class=\"js lj\">Colab Notebook<\/strong>:<span>&nbsp;<\/span><a href=\"https:\/\/colab.research.google.com\/github\/differential-machine-learning\/notebooks\/blob\/master\/DifferentialML.ipynb\" class=\"cg hm kf kg kh ki\" target=\"_blank\" rel=\"noopener nofollow noreferrer\">https:\/\/colab.research.google.com\/github\/differential-machine-learning\/notebooks\/blob\/master\/DifferentialML.ipynb<\/a><\/p>\n<p id=\"7147\" class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\"><a href=\"http:\/\/antoinesavine.com\/\" class=\"cg hm kf kg kh ki\" target=\"_blank\" rel=\"noopener nofollow noreferrer\">Antoine Savine<\/a><\/p>\n<p class=\"jq kc ff bj js b fy jt kd ga ju ke jv jw gl jx jy gm jz ka gn kb ex\"><em>Originally posted <a href=\"https:\/\/towardsdatascience.com\/differential-machine-learning-f207c158064d\" target=\"_blank\" rel=\"noopener 
noreferrer\">here<\/a><\/em><\/p>\n<\/p>\n<\/div>\n<p><a href=\"https:\/\/www.datasciencecentral.com\/xn\/detail\/6448529:BlogPost:950060\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Antoine Savine Brian Huge&nbsp;and I just posted a&nbsp;working paper&nbsp;following six months of research and development on function approximation by artificial intelligence (AI) in Danske [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2020\/05\/12\/differential-ml-on-tensorflow-and-colab\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":473,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[26],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/3441"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=3441"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/3441\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/456"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=3441"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp
\/v2\/categories?post=3441"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=3441"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}