{"id":3000,"date":"2020-01-06T05:00:00","date_gmt":"2020-01-06T05:00:00","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2020\/01\/06\/tool-predicts-how-fast-code-will-run-on-a-chip\/"},"modified":"2020-01-06T05:00:00","modified_gmt":"2020-01-06T05:00:00","slug":"tool-predicts-how-fast-code-will-run-on-a-chip","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2020\/01\/06\/tool-predicts-how-fast-code-will-run-on-a-chip\/","title":{"rendered":"Tool predicts how fast code will run on a chip"},"content":{"rendered":"<p>Author: Rob Matheson | MIT News Office<\/p>\n<div>\n<p>MIT researchers have invented a machine-learning tool that predicts how fast computer chips will execute code from various applications.\u00a0<\/p>\n<p>To get code to run as fast as possible, developers and compilers \u2014 programs that translate programming languages into machine-readable code \u2014 typically use performance models that run the code through a simulation of given chip architectures.\u00a0<\/p>\n<p>Compilers use that information to automatically optimize code, and developers use it to tackle performance bottlenecks on the microprocessors that will run it. But performance models for machine code are handwritten by a relatively small group of experts and are not properly validated. As a consequence, the simulated performance measurements often deviate from real-life results.\u00a0<\/p>\n<p>In a series of conference papers, the researchers describe a novel machine-learning pipeline that automates this process, making it easier, faster, and more accurate. 
In a\u00a0<a href=\"http:\/\/proceedings.mlr.press\/v97\/mendis19a\/mendis19a.pdf\">paper<\/a>\u00a0presented at the International Conference on Machine Learning in June, the researchers presented Ithemal, a neural-network model that trains on labeled data in the form of \u201cbasic blocks\u201d \u2014 fundamental snippets of computing instructions \u2014 to automatically predict how long it takes a given chip to execute previously unseen basic blocks. Results suggest Ithemal performs far more accurately than traditional hand-tuned models.\u00a0<\/p>\n<p>Then, at the November IEEE International Symposium on Workload Characterization, the researchers\u00a0<a href=\"http:\/\/groups.csail.mit.edu\/commit\/papers\/19\/ithemal-measurement.pdf\">presented<\/a>\u00a0a benchmark suite of basic blocks from a variety of domains \u2014 including machine learning, compilers, cryptography, and graphics \u2014 that can be used to validate performance models. They pooled more than 300,000 of the profiled blocks into an open-source dataset called BHive.\u00a0During their evaluations, Ithemal predicted how fast Intel chips would run code even better than a performance model built by Intel itself.\u00a0<\/p>\n<p>Ultimately, developers and compilers can use the tool to generate code that runs faster and more efficiently on an ever-growing number of diverse and \u201cblack box\u201d chip designs.\u00a0\u201cModern computer processors are opaque, horrendously complicated, and difficult to understand. It is also incredibly challenging to write computer code that executes as fast as possible for these processors,\u201d says co-author Michael Carbin, an assistant professor in the Department of Electrical Engineering and Computer Science (EECS) and a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). 
\u201cThis tool is a big step forward toward fully modeling the performance of these chips for improved efficiency.\u201d<\/p>\n<p>Most recently, in a\u00a0<a href=\"http:\/\/papers.nips.cc\/paper\/9604-compiler-auto-vectorization-with-imitation-learning.pdf\">paper<\/a>\u00a0presented at the NeurIPS conference in December, the team proposed a new technique to automatically generate compiler optimizations.\u00a0Specifically, they automatically generate an algorithm, called Vemal, that converts certain code into vectors, which can be used for parallel computing. Vemal outperforms the hand-crafted vectorization algorithms used in LLVM \u2014 a compiler widely used in industry.<\/p>\n<p><strong>Learning from data<\/strong><\/p>\n<p>Designing performance models by hand can be \u201ca black art,\u201d Carbin says. Intel provides extensive documentation of more than 3,000 pages describing its chips\u2019 architectures. But only a small group of experts currently builds the performance models that simulate the execution of code on those architectures.\u00a0<\/p>\n<p>\u201cIntel\u2019s documents are neither error-free nor complete, and Intel will omit certain things, because it\u2019s proprietary,\u201d says lead author Charith Mendis. \u201cHowever, when you use data, you don\u2019t need to know the documentation. If there\u2019s something hidden you can learn it directly from the data.\u201d<\/p>\n<p>To do so, the researchers clocked the average number of cycles a given microprocessor takes to compute basic block instructions \u2014 basically, the sequence of boot-up, execute, and shut down \u2014 without human intervention. Automating the process enables rapid profiling of hundreds of thousands or millions of blocks.\u00a0<\/p>\n<p><strong>Domain-specific architectures<\/strong><\/p>\n<p>In training, the Ithemal model analyzes millions of automatically profiled basic blocks to learn exactly how different chip architectures will execute computation. 
Importantly, Ithemal takes raw text as input and does not require manually adding features to the input data. In testing, Ithemal can be fed previously unseen basic blocks and a given chip, and will generate a single number indicating how fast the chip will execute that code.\u00a0<\/p>\n<p>The researchers found Ithemal cut prediction error \u2014\u00a0the difference between predicted and real-world speed \u2014\u00a0by 50 percent over traditional hand-crafted models. Further,\u00a0in their next\u00a0paper, they showed that\u00a0Ithemal\u2019s error rate was 10 percent, while the Intel performance-prediction model\u2019s error rate was 20 percent, on a variety of basic blocks across multiple domains.<\/p>\n<p>The tool now makes it easier to quickly learn the performance of any new chip architecture, Mendis says. For instance, domain-specific architectures, such as Google\u2019s new Tensor Processing Unit used specifically for neural networks, are now being built but aren\u2019t widely understood. \u201cIf you want to train a model on some new architecture, you just collect more data from that architecture, run it through our profiler, use that information to train Ithemal, and now you have a model that predicts performance,\u201d Mendis says.<\/p>\n<p>Next, the researchers are studying methods to make models interpretable. Much of machine learning is a black box, so it\u2019s not really clear why a particular model made its predictions. \u201cOur model is saying it takes a processor, say, 10 cycles to execute a basic block. Now, we\u2019re trying to figure out why,\u201d Carbin says. 
\u201cThat\u2019s a fine level of granularity that would be amazing for these types of tools.\u201d<\/p>\n<p>They also hope to use Ithemal to enhance the performance of Vemal even further and achieve better performance automatically.<\/p>\n<\/div>\n<p><a href=\"http:\/\/news.mit.edu\/2020\/tool-how-fast-code-run-chip-0106\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Rob Matheson | MIT News Office MIT researchers have invented a machine-learning tool that predicts how fast computer chips will execute code from various [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2020\/01\/06\/tool-predicts-how-fast-code-will-run-on-a-chip\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":473,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/3000"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=3000"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/3000\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/465"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index
.php\/wp-json\/wp\/v2\/media?parent=3000"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=3000"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=3000"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}