{"id":1110,"date":"2018-10-02T06:38:56","date_gmt":"2018-10-02T06:38:56","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2018\/10\/02\/ai-and-algorithmorcacy-what-the-future-will-look-like\/"},"modified":"2018-10-02T06:38:56","modified_gmt":"2018-10-02T06:38:56","slug":"ai-and-algorithmorcacy-what-the-future-will-look-like","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2018\/10\/02\/ai-and-algorithmorcacy-what-the-future-will-look-like\/","title":{"rendered":"AI and Algorithmocracy: What the Future Will Look Like"},"content":{"rendered":"<p>Author: ajit jaokar<\/p>\n<div>\n<h1><a href=\"http:\/\/api.ning.com\/files\/PoXI9KA5a6aXqA-8B9Sd*uccTvM6fnhh77W887IzVauQ6A1uOcIJdSb-0YbK99c0GDeWqSfv3k4J-lgV8WK0iZACMYLfwVFx\/atheniandemocracy.PNG\" target=\"_self\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/api.ning.com\/files\/PoXI9KA5a6aXqA-8B9Sd*uccTvM6fnhh77W887IzVauQ6A1uOcIJdSb-0YbK99c0GDeWqSfv3k4J-lgV8WK0iZACMYLfwVFx\/atheniandemocracy.PNG\" width=\"469\" class=\"align-full\" height=\"374\"><\/a><\/h1>\n<h1><a name=\"_Toc525745078\"><\/a>Introduction<\/h1>\n<p>With the recent news about Facebook and <a href=\"https:\/\/www.theguardian.com\/news\/2018\/mar\/26\/the-cambridge-analytica-files-the-story-so-far\">Cambridge Analytica<\/a>, we are rightly concerned about the power of algorithms to shape political debate and, more generally, our lives. The <a href=\"https:\/\/en.wikipedia.org\/wiki\/Social_Credit_System\">Social Credit System in China<\/a> shows another way in which AI could influence all aspects of society. Based on these and other examples, most policy makers in the West take a negative view of AI and the power of algorithms in society. In this post, I present a different, more optimistic view of the impact of AI on society, in which <strong>AI could be part of the solution to the problem of Algorithmocracy and filter bubbles<\/strong>. 
I discussed some of these ideas last week when I spoke at the <a href=\"https:\/\/events.economist.com\/events-conferences\/emea\/innovation-summit-europe\/#agenda\">Economist innovation summit<\/a> in London. Note that the views presented in this article are mine alone and are not related to any organization I am associated with. The scope of the article is confined to the impact on policy and democracy. It is not related to other aspects of the filter bubble (e.g. product recommendations).<\/p>\n<h2>Filter bubbles and Algorithmocracy<\/h2>\n<p>The term Algorithmocracy has been proposed by Eli Pariser to explain the idea of filter bubbles. The idea can be <a href=\"http:\/\/thoughtfulcampaigner.org\/will-campaigning-like-2040\/\">elaborated as<\/a>: <em>the power of computers will be able to crunch so much data that we\u2019ll no longer need decisions to be made by a form of representative democracy, but instead from publicly held data points \u2013 what you say on Facebook (or whatever has replaced Facebook) will define the decisions that are made.<\/em><\/p>\n<p>Farnam Street elaborates on the idea of <a href=\"https:\/\/fs.blog\/2017\/07\/filter-bubbles\/\">Filter bubbles and Algorithmocracy<\/a>.<\/p>\n<p>Algorithms create \u201ca unique universe of information for each of us \u2026 which fundamentally alters the way we encounter ideas and information.\u201d (Eli Pariser). Filter bubbles create echo chambers. We assume that everyone thinks like us, and this makes us forget other perspectives. This happens because the Internet tends to give us what we want based on our past preferences (data). The algorithms act as a one-way mirror, reflecting and amplifying our views. Personalization via algorithms is bad for democracy because democracy requires citizens to see things from one another\u2019s point of view. We also lose track of facts and rely instead on opinions. 
<\/p>\n<h2><a name=\"_Toc525745079\"><\/a>Algorithmocracy and the impact on democracy<\/h2>\n<p>The risks of populist opinions and limited viewpoints hijacking democracy were well known from the outset. Today, the percentage of people who say it is essential to live in a liberal democracy is declining. The problem is explained in <a href=\"https:\/\/www.theatlantic.com\/magazine\/archive\/2018\/10\/james-madison-mob-rule\/568351\/\">America Is Living James Madison\u2019s Nightmare (The Atlantic)<\/a>. The American Founders designed a government that would resist mob rule, but they did not anticipate how strong the mob could become. Direct democracies risk being hijacked by populist opinion; as the article says: <strong><em>\u201cHad every Athenian citizen been a Socrates, every Athenian assembly would still have been a mob.\u201d<\/em><\/strong> Hence, we have several safeguards to protect democracy, such as representative democracy, plurality in the media, etc. The power of mob opinion was compounded by the introduction of media formats that were not based on text. This problem is discussed eloquently in one of my favourite books of all time, by Neil Postman &#8211; <a href=\"https:\/\/en.wikipedia.org\/wiki\/Amusing_Ourselves_to_Death\">Amusing Ourselves to Death<\/a><span>.<\/span><\/p>\n<p>Neil Postman considers the eighteenth century, the &#8220;Age of Reason&#8221;, the pinnacle of rational argument because of its medium of debate (i.e. the written word). The introduction of media such as television changed that dynamic by shifting the emphasis to presentation rather than content.<\/p>\n<p>With social media and the Internet, we have shifted the debate (if you can still call it that!) to warp speed. The debate is also dominated by mob rule and passion rather than rational deliberation. 
More to the point, social media creates bubbles and echo chambers in which citizens only see their own views reflected back at them (thereby preventing any rational discussion). In the long term, this could profoundly change our society. We all worry about Orwell (1984), i.e. an external entity that controls us. But the bigger issue may be the one raised by Huxley (<a href=\"https:\/\/en.wikipedia.org\/wiki\/Brave_New_World\">Aldous Huxley \u2013 Brave New World<\/a>), where we <strong><em>voluntarily<\/em><\/strong> hand over control to an external entity (in this case AI) due to our own <strong><em>\u2018infinite capacity for distraction\u2019<\/em><\/strong>, as Huxley put it (see <a href=\"https:\/\/highexistence.com\/amusing-ourselves-to-death-huxley-vs-orwell\/\">Huxley vs Orwell<\/a>).<\/p>\n<h2><a name=\"_Toc525745080\"><\/a>Some observations<\/h2>\n<p>Here are some observations:<\/p>\n<ol>\n<li><strong>Filter bubbles are a human problem<\/strong> \u2013 not an algorithm problem, because algorithms reflect human preferences. We are blaming AI for the failings of people. At best, filter bubbles could be seen as <strong><em>a limitation of supervised learning algorithms<\/em><\/strong>, but fundamentally the data drives the algorithm and humans drive the data.<\/li>\n<li><strong>Cambridge Analytica and its like are not legal even under current legislation<\/strong>. These are also not strictly a problem of an algorithm.<\/li>\n<li><strong>Limitations of the media format<\/strong>, as explained above, also stem from the format itself, which encourages a rapid response over a thoughtful one. 
Once again, this is not related to the algorithm.<\/li>\n<li>Techniques to overcome <strong>clickbait<\/strong> also address the limits of the social media format (and are not related to algorithms per se).<\/li>\n<\/ol>\n<p>So, the best way of promoting a return to a thoughtful and balanced discussion may be through <strong>education \u2013 not in the academic sense, but in the sense of diverse and validated viewpoints and promoting systems thinking<\/strong>.<\/p>\n<h2><a name=\"_Toc525745081\"><\/a>AI as a lens for democracy to overcome Algorithmocracy<\/h2>\n<p>Hence, the question is: <strong><u>Can AI be part of such an informed, education-led solution to overcome Algorithmocracy and filter bubbles?<\/u><\/strong><\/p>\n<p>Kahneman points out in <a href=\"https:\/\/en.wikipedia.org\/wiki\/Thinking,_Fast_and_Slow\">Thinking, Fast and Slow<\/a> the pitfalls of <a href=\"https:\/\/en.wikipedia.org\/wiki\/List_of_cognitive_biases\">cognitive biases<\/a>. Cognitive biases are mental shortcuts which are often not accurate. <strong>AI that can be trained to understand and interpret cognitive biases<\/strong> could provide a \u2018slow\u2019 approach (more thoughtful and more nuanced, considering all options). Already, an algorithmic approach is providing an <a href=\"https:\/\/www.cio.com\/article\/3152798\/artificial-intelligence\/how-artificial-intelligence-can-eliminate-bias-in-hiring.html\">objective approach to recruiting using AI<\/a>.<\/p>\n<p>This will work if we do not overload AI with our own biases.<\/p>\n<p>Overall, we worry about the biases of AI, but <strong><em>we don\u2019t talk of projecting our own biases onto AI.<\/em><\/strong> Take religion, for example. All religion is inherently faith-based. An acceptance of faith implies a suspension of reason. From an AI perspective, religion hence does not \u2018compute\u2019. 
Religion is a human choice (bias). But if AI rejects that bias, then AI risks alienating vast swathes of humanity.<\/p>\n<p>Hence, each of our biases contributes to the filter bubble, but we could design the algorithm itself to overcome these cognitive biases (it is worth looking at this list of <a href=\"https:\/\/en.wikipedia.org\/wiki\/List_of_cognitive_biases\">cognitive biases<\/a>).<\/p>\n<p>AI could become a part of the solution by fostering education through awareness.<\/p>\n<h2><a name=\"_Toc525745082\"><\/a>Conclusion<\/h2>\n<p>In this post, we present a more granular and balanced view of AI and democracy. We also show how AI could be part of the solution to the problems of Algorithmocracy and filter bubbles. Overall, I remain an AI optimist \u2013 a position that is not easy to adopt! However, AI can be seen as a way to overcome the current scenario of social-media-driven filter bubbles.<\/p>\n<p>Image: <a href=\"https:\/\/en.wikipedia.org\/wiki\/Athenian_democracy\">Athenian democracy<\/a> \u2013 one of the earliest forms of democracy. 
By <a href=\"https:\/\/commons.wikimedia.org\/w\/index.php?curid=7725777\">Philipp Foltz<\/a> &#8211; <a href=\"http:\/\/www.ancientgreekbattles.net\/...\/Pericles.htm\">www.ancientgreekbattles.net\/&#8230;\/Pericles.htm<\/a>, Public Domain,<\/p>\n<\/div>\n<p><a href=\"https:\/\/www.datasciencecentral.com\/xn\/detail\/6448529:BlogPost:764388\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: ajit jaokar Introduction With the recent news about Facebook and Cambridge analytica, we are rightly concerned about the power and impact of algorithms to [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2018\/10\/02\/ai-and-algorithmorcacy-what-the-future-will-look-like\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":1111,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[26],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/1110"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=1110"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/1110\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/1111"}],"wp:attachment":[{"href":"ht
tps:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=1110"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=1110"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=1110"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}