{"id":2389,"date":"2019-07-23T06:34:11","date_gmt":"2019-07-23T06:34:11","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2019\/07\/23\/measuring-progress-toward-agi-is-hard\/"},"modified":"2019-07-23T06:34:11","modified_gmt":"2019-07-23T06:34:11","slug":"measuring-progress-toward-agi-is-hard","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2019\/07\/23\/measuring-progress-toward-agi-is-hard\/","title":{"rendered":"Measuring Progress Toward AGI Is Hard"},"content":{"rendered":"<p>Author: William Vorhies<\/p>\n<div>\n<p><strong><em>Summary:<\/em><\/strong><em>\u00a0 Artificial General Intelligence (AGI) is still a ways off in the future but surprisingly there\u2019s been very little conversation about how to measure if we\u2019re getting close.\u00a0 This article reviews a proposal to benchmark existing AIs against animal capabilities in an Animal-AI Olympics.\u00a0 It\u2019s a real thing and just now accepting entrants.<\/em><\/p>\n<p>\u00a0<a href=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/3371503542?profile=original\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" src=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/3371503542?profile=RESIZE_710x\" width=\"500\" class=\"align-center\"><\/a><\/p>\n<p>Artificial General Intelligence (AGI) is something that many AI researchers have an opinion about but with surprisingly little consistency.\u00a0 We believe broadly that achieving human-level AGI requires a system that has all of the following:<\/p>\n<ul>\n<li>Consciousness: To have subjective experience and thought.<\/li>\n<li>Self-awareness: To be aware of oneself as a separate individual, especially to be aware of one\u2019s own thoughts and uniqueness.<\/li>\n<li>Sentience: The ability to feel perceptions or emotions subjectively.<\/li>\n<li>Sapience: The capacity for wisdom.<\/li>\n<\/ul>\n<p>OK, so those are the characteristics described in science fiction.\u00a0 
We\u2019d probably think we were pretty close if an AI could just:<\/p>\n<ol>\n<li>Learn from one source and apply it to another, completely unrelated field. In other words, generalize.<\/li>\n<li>Remember. That is, recall a task once learned and apply it again to other data or other environments.<\/li>\n<li>Be small and fast. Today\u2019s systems are very energy hungry, which stands in the way of making them tiny.<\/li>\n<li>Learn in a truly unsupervised manner.<\/li>\n<\/ol>\n<p>There\u2019s also quite a wide range of opinion about when we\u2019ll achieve AGI.\u00a0 Just about a year ago we reported on a panel at the 2017 conference on Machine Learning at the University of Toronto on the theme \u2018How far away is AGI?\u2019.\u00a0 The participants were an impressive group of 7 leading thinkers and investors in AI (including Ben Goertzel and Steve Jurvetson).\u00a0 Here\u2019s what they thought:<\/p>\n<ul>\n<li>5 years to subhuman capability<\/li>\n<li>7 years<\/li>\n<li>13 years maybe (By 2025 we\u2019ll know if we can have it by 2030)<\/li>\n<li>23 years (2040)<\/li>\n<li>30 years (2047)<\/li>\n<li>30 years<\/li>\n<li>30 to 70 years<\/li>\n<\/ul>\n<p>There\u2019s significant disagreement, but the median is 23 years (2040), with half the group expecting it to take considerably longer.\u00a0<\/p>\n<p><span style=\"font-size: 12pt;\"><strong>How Do We Measure Progress and Not Just Final Success?<\/strong><\/span><\/p>\n<p>Needless to say, full achievement of either of those lists is a tall order.\u00a0 Not all researchers agree on the degree to which these characteristics are necessary and sufficient before we declare victory.\u00a0 After all, we are on the journey to achieve AGI, and no one has yet actually seen the destination.<\/p>\n<p>Several tests of final success have been proposed, most of which you\u2019ve probably heard of.<\/p>\n<ul>\n<li><a href=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/3371515400?profile=original\" target=\"_blank\" rel=\"noopener 
noreferrer\"><img decoding=\"async\" src=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/3371515400?profile=RESIZE_710x\" width=\"200\" class=\"align-right\"><\/a>The Turing Test: Can a computer convince a human that it is also human.\u00a0 This one is now 69 years old.<\/li>\n<li>The Employment Test: Nils Nilson (2005), a robot should automate economically important jobs.<\/li>\n<li>The Coffee Test: From Steve Wozniak, cofounder of Apple in 2007.\u00a0 When a robot can enter a strange house and make a decent cup of coffee.\u00a0<\/li>\n<li>The Robot College Student: From Ben Goertzel in 2012.\u00a0 When a robot can enroll in a college and earn a degree using the same resources and methods as a human.\u00a0<\/li>\n<\/ul>\n<p>Curiously there don\u2019t seem to have been any significant new tests of final success added in the last 7 years.\u00a0 Is the matter settled?<\/p>\n<p>Actually what seems not to be settled is how to measure our progress toward these goals.\u00a0 Like most progress in our field we ought to be able to see these successes coming some years in advance as incremental improvements allow better performance.\u00a0 But how do we tell if we\u2019re 50% there or 75%?<\/p>\n<\/p>\n<p><span style=\"font-size: 12pt;\"><strong>The Animal-AI Olympics<\/strong><\/span><\/p>\n<p><a href=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/3371515992?profile=original\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" src=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/3371515992?profile=RESIZE_710x\" width=\"250\" class=\"align-right\"><\/a>One interesting approach was floated this last February in a project partnership between the University of Cambridge Leverhulme Center for the Future of Intelligence, and GoodAI, a research institution based in Prague.\u00a0 Their thought is to benchmark the current level of various AIs against different animal species using a variety of already established animal 
cognition tasks.\u00a0 Hence, the <a href=\"http:\/\/animalaiolympics.com\/\"><em><u>Animal-AI Olympics<\/u><\/em><\/a>.<\/p>\n<p>In June they announced the details of what these tests would be and are now taking submissions from potential competitors.\u00a0 They propose that <a href=\"https:\/\/www.mdcrosby.com\/blog\/animalaieval.html\"><em><u>the following 10 tests<\/u><\/em><\/a> represent increasing levels of difficulty and therefore sophistication in reasoning, for both animal and AI.<\/p>\n<ol>\n<li><strong>Food:<\/strong> A single positive reward.\u00a0 Get as much food as possible within the time limits.<\/li>\n<li><strong>Preferences:<\/strong> Modifies the food test to include a preference selection for getting more food or easier to obtain food.<\/li>\n<li><strong>Obstacles:<\/strong> Some immovable barriers that impede the agent\u2019s navigation require the agent to explore the environment to solve the task.<\/li>\n<li><strong>Avoidance:<\/strong> Introduces \u2018hot zones\u2019 and \u2018death zones\u2019 requiring the agent to avoid negative stimuli.<\/li>\n<li><strong>Spatial Reasoning:<\/strong> Tests for complex navigational abilities and requires some knowledge of simple physics by which the environment operates.<\/li>\n<li><strong>Generalization:<\/strong> Includes variations of the environment that may look superficially different to the agent even though the properties and solutions to problems remain the same.<\/li>\n<li><strong>Internal Models:<\/strong> The agent must be able to store an internal model of the environment. Lights may turn off after a while requiring the agent to remember the layout and navigate in the dark.<\/li>\n<li><strong>Object Permanence:<\/strong> Many animals seem to understand that when an object goes out of sight it still exists. This is a property of our world, and of our environment, but is not necessarily respected by many AI systems. 
There are many simple interactions that aren&#8217;t possible without understanding object permanence.<\/li>\n<li><strong>Advanced Preferences:<\/strong> Tests the agent&#8217;s ability to make more complex decisions to ensure it gets the highest possible reward. Expect tests with choices that lead to different achievable rewards.<\/li>\n<li><strong>Causal Reasoning:<\/strong> Includes the ability to plan ahead so that the consequences of actions are considered before they are undertaken. All the tests in this category have been passed by some non-human animals, and these include some of the more striking examples of intelligence from across the animal kingdom.<\/li>\n<\/ol>\n<p>This strikes me as valuable to know, but not particularly definitive in predicting how far we\u2019ve progressed toward AGI.\u00a0 It also seems to focus exclusively on reinforcement learning.\u00a0 My guess is that 8 out of 10 AGI researchers would probably say reinforcement learning is the most likely path, yet we shouldn\u2019t rule out a breakthrough coming from other efforts like spiking or neuromorphic chips or even literal <a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blogs\/a-wetware-approach-to-artificial-general-intelligence-agi\"><em><u>biological wetware chips<\/u><\/em><\/a>.\u00a0<\/p>\n<p>I\u2019m thinking that those Boston Dynamics robots get to at least number 6 on that scale and maybe a little higher.\u00a0 Still, it\u2019s good to see someone put a stake in the ground and take a shot at this.\u00a0 I\u2019m eager to see the results.<\/p>\n<p>\u00a0<\/p>\n<p>\u00a0<\/p>\n<p><strong>Other articles on AGI:<\/strong><\/p>\n<p><a href=\"http:\/\/feeds.feedburner.com\/A%20Wetware%20Approach%20to%20Artificial%20General%20Intelligence%20(AGI)\"><em><u>A Wetware Approach to Artificial General Intelligence (AGI)<\/u><\/em><\/a> <em><u>(2018)<\/u><\/em><\/p>\n<p><a 
href=\"https:\/\/www.datasciencecentral.com\/profiles\/blogs\/in-search-of-artificial-general-intelligence-agi\"><em><u>In Search of Artificial General Intelligence (AGI)<\/u><\/em><\/a> <em><u>(2017)<\/u><\/em><\/p>\n<p><a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blogs\/artificial-general-intelligence-the-holy-grail-of-ai\"><em><u>Artificial General Intelligence \u2013 The Holy Grail of AI<\/u><\/em><\/a> <em><u>(2016)<\/u><\/em><\/p>\n<p><strong>Other articles on Spiking \/ Neuromorphic Neural Nets<\/strong><\/p>\n<p><a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blogs\/off-the-beaten-path-htm-based-strong-ai-beats-rnns-and-cnns-at-pr\"><em><u>Off the Beaten Path &#8211; HTM-based Strong AI Beats RNNs and CNNs at Prediction and Anomaly Detection<\/u><\/em><\/a><\/p>\n<p><a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blogs\/the-three-way-race-to-the-future-of-ai-quantum-vs-neuromorphic-vs\"><em><u>The Three Way Race to the Future of AI. Quantum vs. Neuromorphic vs. 
High Performance Computing<\/u><\/em><\/a><\/p>\n<p><a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blogs\/more-on-3rd-generation-spiking-neural-nets\"><em><u>More on 3rd Generation Spiking Neural Nets<\/u><\/em><\/a><\/p>\n<p><a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blogs\/beyond-deep-learning-3rd-generation-neural-nets\"><em><u>Beyond Deep Learning \u2013 3rd Generation Neural Nets<\/u><\/em><\/a><\/p>\n<p>\u00a0<\/p>\n<p><a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blog\/list?user=0h5qapp2gbuf8\"><em><u>Other articles by Bill Vorhies<\/u><\/em><\/a><\/p>\n<p>\u00a0<\/p>\n<p>About the author:\u00a0 Bill is Contributing Editor for Data Science Central.\u00a0 Bill is also President &#038; Chief Data Scientist at Data-Magnum and has practiced as a data scientist since 2001.\u00a0 His articles have been read more than 2 million times.<\/p>\n<p>He can be reached at:<\/p>\n<p><a href=\"mailto:Bill@DataScienceCentral.com\">Bill@DataScienceCentral.com<\/a> <span>or<\/span> <a href=\"mailto:Bill@Data-Magnum.com\">Bill@Data-Magnum.com<\/a><\/p>\n<p><span>\u00a0<\/span><\/p>\n<\/div>\n<p><a href=\"https:\/\/www.datasciencecentral.com\/xn\/detail\/6448529:BlogPost:859706\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: William Vorhies Summary:\u00a0 Artificial General Intelligence (AGI) is still a ways off in the future but surprisingly there\u2019s been very little conversation about how [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2019\/07\/23\/measuring-progress-toward-agi-is-hard\/\">Read 
More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":467,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[26],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2389"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=2389"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2389\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/458"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=2389"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=2389"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=2389"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}