{"id":1514,"date":"2018-12-29T06:35:59","date_gmt":"2018-12-29T06:35:59","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2018\/12\/29\/announcement-winner-of-the-data-science-central-competition\/"},"modified":"2018-12-29T06:35:59","modified_gmt":"2018-12-29T06:35:59","slug":"announcement-winner-of-the-data-science-central-competition","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2018\/12\/29\/announcement-winner-of-the-data-science-central-competition\/","title":{"rendered":"Announcement: Winner of the Data Science Central Competition"},"content":{"rendered":"<p>Author: Vincent Granville<\/p>\n<div>\n<p>Back in 2017, we posted a problem related to stochastic processes and controlled random walks, offering a $2,000 award for a sound solution, see <a href=\"https:\/\/www.analyticbridge.datasciencecentral.com\/profiles\/blogs\/interesting-probability-problem-for-serious-geeks\" target=\"_blank\" rel=\"noopener\">here<\/a> for full details. The problem, which had a Fintech flavor, was only solved recently (December 2018) by\u00a0Victor Zurkowski.<\/p>\n<p><strong>About the problem:<\/strong><\/p>\n<p>Let&#8217;s start with<span>\u00a0<\/span><em>X<\/em>(1) = 0, and define<span>\u00a0<\/span><em>X<\/em>(<em>k<\/em>) recursively as follows, for<span>\u00a0<\/span><em>k<\/em><span>\u00a0<\/span>> 1:<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/ozpssO4.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/ozpssO4.png?width=327\" width=\"327\" class=\"align-center\"><\/a><\/p>\n<p><a href=\"https:\/\/i.imgur.com\/0UkIUnK.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/0UkIUnK.png?width=324\" width=\"324\" class=\"align-center\"><\/a><\/p>\n<p>and let&#8217;s define<span>\u00a0<\/span><em>U<\/em>(<em>k<\/em>),<span>\u00a0<\/span><em>Z<\/em>(<em>k<\/em>), and<span>\u00a0<\/span><em>Z<\/em><span>\u00a0<\/span>as follows:<\/p>\n<p><a 
href=\"https:\/\/i.imgur.com\/ZtV2AJl.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/ZtV2AJl.png?width=120\" width=\"120\" class=\"align-center\"><\/a><\/p>\n<p><a href=\"https:\/\/i.imgur.com\/H1iOwc3.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/H1iOwc3.png?width=116\" width=\"116\" class=\"align-center\"><\/a><\/p>\n<p><a href=\"https:\/\/i.imgur.com\/jT3Y5xF.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/jT3Y5xF.png?width=113\" width=\"113\" class=\"align-center\"><\/a><\/p>\n<p>where the <em>V<\/em>(<em>k<\/em>)&#8217;s are <em>independent<\/em> uniform deviates on [0, 1]. So there are two non-negative parameters in this problem, <em>a<\/em> and <em>b<\/em>, and <em>U<\/em>(<em>k<\/em>) is always between 0 and 1. When <em>b<\/em> = 1, the <em>U<\/em>(<em>k<\/em>)&#8217;s are just standard uniform deviates, and if <em>b<\/em> = 0, then <em>U<\/em>(<em>k<\/em>) = 1. The case <em>a<\/em> = <em>b<\/em> = 0 is degenerate and should be ignored. The case <em>a<\/em> > 0 and <em>b<\/em> = 0 is of special interest, and it is a number theory problem in itself, <a href=\"http:\/\/www.datasciencecentral.com\/profiles\/blogs\/new-representation-of-numbers-with-very-fast-converging-fractions\" target=\"_blank\" rel=\"noopener\">related to this problem<\/a> when <em>a<\/em> = 1. 
Also, just like in random walks or Markov chains, the <em>X<\/em>(<em>k<\/em>)&#8217;s are not independent; they are indeed highly auto-correlated.<\/p>\n<p>Prove that if <em>a<\/em> &lt; 1, then <em>X<\/em>(<em>k<\/em>) converges to 0 as <em>k<\/em> increases. Under the same condition, prove that the limiting distribution <em>Z<\/em><\/p>\n<ul>\n<li>always exists (note: if <em>a<\/em> > 1, <em>X<\/em>(<em>k<\/em>) may not converge to zero, causing a drift and asymmetry),<\/li>\n<li>always takes values between -1 and +1, with min(<em>Z<\/em>) = -1 and max(<em>Z<\/em>) = +1,<\/li>\n<li>is symmetric, with mean and median equal to 0,<\/li>\n<li>and does not depend on <em>a<\/em>, but only on <em>b<\/em>.<\/li>\n<\/ul>\n<p>For instance, for <em>b<\/em> = 1, even <em>a<\/em> = 0 yields the same triangular distribution for <em>Z<\/em> as any <em>a<\/em> > 0.<\/p>\n<p>Main question: In general, what is the limiting distribution of <em>Z<\/em>? I guessed that the solution (which involved solving a stochastic integral equation) was<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/dTxllJB.png?width=211\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/dTxllJB.png?width=211\" class=\"align-center\"><\/a><\/p>\n<p><strong>About the author and the solution:<\/strong><\/p>\n<p>In a 27-page paper focusing on convergence issues, Victor not only confirmed that the above density function is a solution to this problem, but also proved that the solution is unique. One detail still needs to be worked out: whether or not scaled <em>Z<\/em> visits the neighborhood of every point in [-1, 1] infinitely often. 
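<\/p>\n<p>As a side note on the bullet list above: the claimed <em>b<\/em> = 1 limit of <em>Z<\/em> is the triangular distribution on [-1, 1], and the difference of two independent uniform deviates on [0, 1] follows exactly that symmetric triangular law. A minimal Monte Carlo sketch, using that difference as a stand-in (it does not simulate the <em>X<\/em>(<em>k<\/em>) process itself, whose recursion is given in the images above), can sanity-check the claimed properties of <em>Z<\/em>:<\/p>\n<pre>

```python
import random

random.seed(42)

# Stand-in for the claimed b = 1 limit of Z: the difference of two
# independent Uniform[0, 1] deviates has the symmetric triangular
# distribution on [-1, 1].
n = 200000
samples = sorted(random.random() - random.random() for _ in range(n))

mean = sum(samples) / n
median = 0.5 * (samples[n // 2 - 1] + samples[n // 2])

print(min(samples) >= -1.0 and max(samples) <= 1.0)  # support within [-1, 1]
print(abs(mean) < 0.01 and abs(median) < 0.01)       # centered at 0
```

<\/pre>\n<p>The checks confirm support within [-1, 1] and a mean and median close to 0, consistent with the properties listed above.<\/p>\n<p>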
Victor believes that the answer is positive. You can read his solution <a href=\"https:\/\/github.com\/victorz-ca\/Granville_Problem\" target=\"_blank\" rel=\"noopener\">here<\/a>, and we hope it will result in a publication in a scientific journal.<\/p>\n<p><a href=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/544155599?profile=original\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" src=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/544155599?profile=original\" class=\"align-center\"><\/a><\/p>\n<p><em><a href=\"https:\/\/www.linkedin.com\/in\/victorzurkowski\/\" target=\"_blank\" rel=\"noopener\">Victor Zurkowski, PhD<\/a>, is an expert in predictive modeling, machine learning, and optimization with 20+ years of experience, including deep expertise in developing pricing models and optimization engines across industries such as retail and financial services. He has published academic papers in Mathematics and Statistics on numerous topics, and is currently Assistant Professor \/ Gibbs Instructor of Mathematics at Yale University. Victor holds a Ph.D. in Mathematics from the University of Minnesota and an M.Sc. 
in Statistics from the University of Toronto.<\/em><\/p>\n<\/div>\n<p><a href=\"https:\/\/www.datasciencecentral.com\/xn\/detail\/6448529:BlogPost:789228\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Vincent Granville Back in 2017, we posted a problem related to stochastic processes and controlled random walks, offering a $2,000 award for a sound [&hellip;] <span 
class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2018\/12\/29\/announcement-winner-of-the-data-science-central-competition\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":462,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[26],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/1514"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=1514"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/1514\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/458"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=1514"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=1514"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=1514"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}