{"id":7167,"date":"2024-03-01T05:00:00","date_gmt":"2024-03-01T05:00:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2024\/03\/01\/startup-accelerates-progress-toward-light-speed-computing\/"},"modified":"2024-03-01T05:00:00","modified_gmt":"2024-03-01T05:00:00","slug":"startup-accelerates-progress-toward-light-speed-computing","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2024\/03\/01\/startup-accelerates-progress-toward-light-speed-computing\/","title":{"rendered":"Startup accelerates progress toward light-speed computing"},"content":{"rendered":"<p>Author: Zach Winn | MIT News<\/p>\n<div>\n<p>Our ability to cram ever-smaller transistors onto a chip has enabled today\u2019s age of ubiquitous computing. But that approach is finally running into limits, with some experts <a href=\"https:\/\/www.technologyreview.com\/2020\/02\/24\/905789\/were-not-prepared-for-the-end-of-moores-law\/\" target=\"_blank\" rel=\"noopener\">declaring an end to Moore\u2019s Law<\/a> and a related principle known as Dennard scaling.<\/p>\n<p>Those developments couldn\u2019t be coming at a worse time. Demand for computing power has skyrocketed in recent years, thanks in large part to the rise of artificial intelligence, and it shows no signs of slowing down.<\/p>\n<p>Now Lightmatter, a company founded by three MIT alumni, is continuing the remarkable progress of computing by rethinking the lifeblood of the chip. Instead of relying solely on electricity, the company also uses light for data processing and transport. 
The company\u2019s first two products, a chip specializing in artificial intelligence operations and an interconnect that facilitates data transfer between chips, use both photons and electrons to drive more efficient operations.<\/p>\n<p>\u201cThe two problems we are solving are \u2018How do chips talk?\u2019 and \u2018How do you do these [AI] calculations?\u2019\u201d Lightmatter co-founder and CEO Nicholas Harris PhD \u201917 says. \u201cWith our first two products, Envise and Passage, we\u2019re addressing both of those questions.\u201d<\/p>\n<p>In a nod to the size of the problem and the demand for AI, Lightmatter raised just north of $300 million in 2023 at a valuation of $1.2 billion. Now the company is demonstrating its technology with some of the largest technology companies in the world in hopes of reducing the massive energy demand of data centers and AI models.<\/p>\n<p>&#8220;We\u2019re going to enable platforms on top of our interconnect technology that are made up of hundreds of thousands of next-generation compute units,\u201d Harris says. \u201cThat simply wouldn\u2019t be possible without the technology that we\u2019re building.\u201d<\/p>\n<p><strong>From idea to $100K<\/strong><\/p>\n<p>Prior to MIT, Harris worked at the semiconductor company Micron Technology, where he studied the fundamental devices behind integrated chips. The experience made him see how the traditional approach for improving computer performance \u2014 cramming more transistors onto each chip \u2014 was hitting its limits.<\/p>\n<p>\u201cI saw how the roadmap for computing was slowing, and I wanted to figure out how I could continue it,\u201d Harris says. \u201cWhat approaches can augment computers? Quantum computing and photonics were two of those pathways.\u201d<\/p>\n<p>Harris came to MIT to work on photonic quantum computing for his PhD under Dirk Englund, an associate professor in the Department of Electrical Engineering and Computer Science. 
As part of that work, he built silicon-based integrated photonic chips that could send and process information using light instead of electricity.<\/p>\n<p>The work led to dozens of patents and more than 80 research papers in prestigious journals like <em>Nature<\/em>. But another technology also caught Harris\u2019s attention at MIT.<\/p>\n<p>\u201cI remember walking down the hall and seeing students just piling out of these auditorium-sized classrooms, watching relayed live videos of lectures to see professors teach deep learning,\u201d Harris recalls, referring to the artificial intelligence technique. \u201cEverybody on campus knew that deep learning was going to be a huge deal, so I started learning more about it, and we realized that the systems I was building for photonic quantum computing could actually be leveraged to do deep learning.\u201d<\/p>\n<p>Harris had planned to become a professor after his PhD, but he realized he could attract more funding and innovate more quickly through a startup, so he teamed up with Darius Bunandar PhD \u201918, who was also studying in Englund\u2019s lab, and Thomas Graham MBA \u201918. The co-founders successfully launched into the startup world by <a href=\"https:\/\/news.mit.edu\/2017\/mit-100k-optical-chips-ai-computations-light-speed-0518\" target=\"_blank\" rel=\"noopener\">winning<\/a> the 2017 MIT $100K Entrepreneurship Competition.<\/p>\n<p><strong>Seeing the light<\/strong><\/p>\n<p>Lightmatter\u2019s Envise chip takes the part of computing that electrons do well, like memory, and combines it with what light does well, like performing the massive matrix multiplications of deep-learning models.<\/p>\n<p>\u201cWith photonics, you can perform multiple calculations at the same time because the data is coming in on different colors of light,\u201d Harris explains. \u201cIn one color, you could have a photo of a dog. In another color, you could have a photo of a cat. 
In another color, maybe a tree, and you could have all three of those operations going through the same optical computing unit, this matrix accelerator, at the same time. That drives up operations per area, and it reuses the hardware that&#8217;s there, driving up energy efficiency.\u201d<\/p>\n<p>Passage leverages light\u2019s low latency and high bandwidth to link processors in a manner similar to how fiber optic cables use light to send data over long distances. It also enables chips as big as entire wafers to act as a single processor. Sending information between chips is central to running the massive server farms that power cloud computing and AI systems like ChatGPT.<\/p>\n<p>Both products are designed to make computing more energy efficient, which Harris says is needed to keep up with rising demand without huge increases in power consumption.<\/p>\n<p>\u201cBy 2040, some predict that around 80 percent of all energy usage on the planet will be devoted to data centers and computing, and AI is going to be a huge fraction of that,\u201d Harris says. \u201cWhen you look at computing deployments for training these large AI models, they\u2019re headed toward using hundreds of megawatts. Their power usage is on the scale of cities.\u201d<\/p>\n<p>Lightmatter is currently working with chipmakers and cloud service providers toward mass deployment. Harris notes that because the company\u2019s equipment runs on silicon, it can be produced by existing semiconductor fabrication facilities without massive changes in process.<\/p>\n<p>The ambitious plans are designed to open up a new path forward for computing that would have huge implications for the environment and the economy.<\/p>\n<p>\u201cWe\u2019re going to continue looking at all of the pieces of computers to figure out where light can accelerate them, make them more energy efficient, and faster, and we\u2019re going to continue to replace those parts,\u201d Harris says. 
\u201cRight now, we\u2019re focused on interconnect with Passage and on compute with Envise. But over time, we\u2019re going to build out the next generation of computers, and it\u2019s all going to be centered around light.\u201d<\/p>\n<\/div>\n<p><a href=\"https:\/\/news.mit.edu\/2024\/startup-lightmatter-accelerates-progress-toward-light-speed-computing-0301\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Zach Winn | MIT News Our ability to cram ever-smaller transistors onto a chip has enabled today\u2019s age of ubiquitous computing. But that approach [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2024\/03\/01\/startup-accelerates-progress-toward-light-speed-computing\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":456,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/7167"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=7167"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/7167\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/459"}],"wp:attachmen
t":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=7167"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=7167"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=7167"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}