{"id":8803,"date":"2026-01-30T21:50:00","date_gmt":"2026-01-30T21:50:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2026\/01\/30\/the-philosophical-puzzle-of-rational-artificial-intelligence\/"},"modified":"2026-01-30T21:50:00","modified_gmt":"2026-01-30T21:50:00","slug":"the-philosophical-puzzle-of-rational-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2026\/01\/30\/the-philosophical-puzzle-of-rational-artificial-intelligence\/","title":{"rendered":"The philosophical puzzle of rational artificial intelligence"},"content":{"rendered":"<p>Author: Amanda Diehl | MIT Schwarzman College of Computing<\/p>\n<div>\n<p>To what extent can an artificial system be rational?<\/p>\n<p>A new MIT course, <a href=\"https:\/\/computing.mit.edu\/cross-cutting\/common-ground-for-computing-education\/common-ground-subjects\/ai-and-rationality\/\">6.S044\/24.S00<\/a> (AI and Rationality), doesn\u2019t seek to answer this question. Instead, it challenges students to explore this and other philosophical problems through the lens of AI research. For the next generation of scholars, concepts of rationality and agency could prove integral in AI decision-making, especially when influenced by how humans understand their own cognitive limits and their constrained, subjective views of what is or isn\u2019t rational.<\/p>\n<p>This inquiry is rooted in a deep relationship between computer science and philosophy, which have long collaborated in formalizing what it is to form rational beliefs, learn from experience, and make rational decisions in pursuit of one&#8217;s goals.<\/p>\n<p>\u201cYou\u2019d imagine computer science and philosophy are pretty far apart, but they\u2019ve always intersected. 
The technical parts of philosophy really overlap with AI, especially early AI,\u201d says course instructor Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering at MIT, calling to mind Alan Turing, who was both a computer scientist and a philosopher. Kaelbling herself holds an undergraduate degree in philosophy from Stanford University, noting that computer science wasn\u2019t available as a major at the time.<\/p>\n<p>Brian Hedden, a professor in the Department of Linguistics and Philosophy who holds an MIT Schwarzman College of Computing shared position with the Department of Electrical Engineering and Computer Science (EECS) and teaches the class with Kaelbling, notes that the two disciplines are more aligned than people might imagine, adding that the \u201cdifferences are in emphasis and perspective.\u201d<\/p>\n<p><strong>Tools for further theoretical thinking<\/strong><\/p>\n<p>Kaelbling and Hedden created AI and Rationality, offered for the first time in fall 2025, as part of the <a href=\"https:\/\/computing.mit.edu\/cross-cutting\/common-ground-for-computing-education\/\">Common Ground for Computing Education<\/a>, a cross-cutting initiative of the MIT Schwarzman College of Computing that brings multiple departments together to develop and teach new courses and launch new programs that blend computing with other disciplines.<\/p>\n<p>With over two dozen students registered, AI and Rationality is one of two Common Ground classes with a foundation in philosophy, the other being <a href=\"https:\/\/news.mit.edu\/2025\/bridging-philosophy-and-ai-to-explore-computing-ethics-0211\">6.C40\/24.C40 (Ethics of Computing)<\/a>.<\/p>\n<p>While Ethics of Computing explores concerns about the societal impacts of rapidly advancing technology, AI and Rationality examines the disputed definition of rationality by considering several components: the nature of rational agency, the concept of a fully autonomous and intelligent agent, and the ascription 
of beliefs and desires onto these systems.<\/p>\n<p>Because AI is extremely broad in its implementation and each use case raises different issues, Kaelbling and Hedden brainstormed topics that could provide fruitful discussion and engagement between the two perspectives of computer science and philosophy.<\/p>\n<p>\u201cIt&#8217;s important when I work with students studying machine learning or robotics that they step back a bit and examine the assumptions they\u2019re making,\u201d Kaelbling says. \u201cThinking about things from a philosophical perspective helps people back up and understand better how to situate their work in actual context.\u201d<\/p>\n<p>Both instructors stress that this isn\u2019t a course that provides concrete answers to questions on what it means to engineer a rational agent.<\/p>\n<p>Hedden says, \u201cI see the course as building their foundations. We\u2019re not giving them a body of doctrine to learn and memorize and then apply. We\u2019re equipping them with tools to think about things in a critical way as they go out into their chosen careers, whether they\u2019re in research or industry or government.\u201d<\/p>\n<p>The rapid progress of AI also presents a new set of challenges in academia. Predicting what students may need to know five years from now is something Kaelbling sees as an impossible task. 
\u201cWhat we need to do is give them the tools at a higher level \u2014 the habits of mind, the ways of thinking \u2014 that will help them approach the stuff that we really can\u2019t anticipate right now,\u201d she says.<\/p>\n<p><strong>Blending disciplines and questioning assumptions<\/strong><\/p>\n<p>So far, the class has drawn students from a wide range of disciplines \u2014 from those firmly grounded in computing to others interested in exploring how AI intersects with their own fields of study.<\/p>\n<p>Throughout the semester\u2019s readings and discussions, students grappled with different definitions of rationality and how those definitions pushed back against assumptions in their fields.<\/p>\n<p>On what surprised her about the course, Amanda Paredes Rioboo, a senior in EECS, says, \u201cWe\u2019re kind of taught that math and logic are this golden standard or truth. This class showed us a variety of examples that humans act inconsistently with these mathematical and logical frameworks. We opened up this whole can of worms as to whether, is it humans that are irrational? Is it the machine learning systems that we designed that are irrational? Is it math and logic itself?\u201d<\/p>\n<p>Junior Okoroafor, a PhD student in the Department of Brain and Cognitive Sciences, appreciated the class\u2019s challenges and the ways in which the definition of a rational agent could change depending on the discipline. \u201cRepresenting what each field means by rationality in a formal framework makes it clear exactly which assumptions are shared, and which are different, across fields.\u201d<\/p>\n<p>The co-teaching, collaborative structure of the course, as with all Common Ground endeavors, gave students and the instructors opportunities to hear different perspectives in real time.<\/p>\n<p>For Paredes Rioboo, this is her third Common Ground course. She says, \u201cI really like the interdisciplinary aspect. 
They\u2019ve always felt like a nice mix of theoretical and applied from the fact that they need to cut across fields.\u201d<\/p>\n<p>According to Okoroafor, Kaelbling and Hedden demonstrated an obvious synergy between their fields; he says it felt as if they were engaging and learning along with the class. Seeing how computer science and philosophy can inform each other helped him understand their common ground and the valuable perspectives each discipline brings to intersecting issues.<\/p>\n<p>He adds, \u201cPhilosophy also has a way of surprising you.\u201d<\/p>\n<\/div>\n<p><a href=\"https:\/\/news.mit.edu\/2026\/philosophical-puzzle-rational-artificial-intelligence-0130\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Amanda Diehl | MIT Schwarzman College of Computing To what extent can an artificial system be rational? A new MIT course, 6.S044\/24.S00 (AI and [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2026\/01\/30\/the-philosophical-puzzle-of-rational-artificial-intelligence\/\">Read 
More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":471,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/8803"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=8803"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/8803\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/461"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=8803"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=8803"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=8803"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}