{"id":5189,"date":"2021-11-09T06:29:38","date_gmt":"2021-11-09T06:29:38","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2021\/11\/09\/the-transformer-model\/"},"modified":"2021-11-09T06:29:38","modified_gmt":"2021-11-09T06:29:38","slug":"the-transformer-model","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2021\/11\/09\/the-transformer-model\/","title":{"rendered":"The Transformer Model"},"content":{"rendered":"<p>Author: Stefania Cristina<\/p>\n<div>\n<p>We have already familiarized ourselves with the concept of self-attention as implemented by the Transformer attention mechanism for neural machine translation. We will now shift our focus to the details of the Transformer architecture itself, to discover how self-attention can be implemented without relying on recurrence and convolutions.<\/p>\n<p>In this tutorial, you will discover the network architecture of the Transformer model.<\/p>\n<p>After completing this tutorial, you will know:<\/p>\n<ul>\n<li>How the Transformer architecture implements an encoder-decoder structure without recurrence and convolutions.<span class=\"Apple-converted-space\">\u00a0<\/span>\n<\/li>\n<li>How the Transformer encoder and decoder work.<span class=\"Apple-converted-space\">\u00a0<\/span>\n<\/li>\n<li>How the Transformer self-attention compares to the use of recurrent and convolutional layers.<span class=\"Apple-converted-space\">\u00a0<\/span>\n<\/li>\n<\/ul>\n<p>Let\u2019s get started.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<div id=\"attachment_13041\" style=\"width: 1034px\" class=\"wp-caption aligncenter\">\n<a href=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_cover-1-scaled.jpg\"><img decoding=\"async\" aria-describedby=\"caption-attachment-13041\" loading=\"lazy\" class=\"wp-image-13041 size-large\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_cover-1-1024x785.jpg\" 
alt=\"\" width=\"1024\" height=\"785\" srcset=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_cover-1-1024x785.jpg 1024w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_cover-1-300x230.jpg 300w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_cover-1-768x589.jpg 768w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_cover-1-1536x1178.jpg 1536w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_cover-1-2048x1571.jpg 2048w\" sizes=\"(max-width: 1024px) 100vw, 1024px\"><\/a><\/p>\n<p id=\"caption-attachment-13041\" class=\"wp-caption-text\">The Transformer Model <br \/>Photo by <a href=\"https:\/\/unsplash.com\/photos\/vuMTQj6aQQ0\">Samule Sun<\/a>, some rights reserved.<\/p>\n<\/div>\n<h2><b>Tutorial Overview<\/b><\/h2>\n<p>This tutorial is divided into three parts; they are:<\/p>\n<ul>\n<li>The Transformer Architecture\n<ul>\n<li>The Encoder<\/li>\n<li>The Decoder<\/li>\n<\/ul>\n<\/li>\n<li>Sum Up: The Transformer Model<\/li>\n<li>Comparison to Recurrent and Convolutional Layers<\/li>\n<\/ul>\n<h2><b>Prerequisites<\/b><\/h2>\n<p>For this tutorial, we assume that you are already familiar with:<\/p>\n<ul>\n<li><a href=\"https:\/\/machinelearningmastery.com\/what-is-attention\/\">The concept of attention<\/a><\/li>\n<li><a href=\"https:\/\/machinelearningmastery.com\/the-attention-mechanism-from-scratch\/\">The attention mechanism<\/a><\/li>\n<li><a href=\"https:\/\/machinelearningmastery.com\/the-transformer-attention-mechanism\">The Transformer attention mechanism<\/a><\/li>\n<\/ul>\n<h2><b>The Transformer Architecture<\/b><\/h2>\n<p>The Transformer architecture follows an encoder-decoder structure, but does not rely on recurrence and convolutions in order to generate an output.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<div id=\"attachment_12821\" style=\"width: 379px\" class=\"wp-caption 
aligncenter\">\n<a href=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/08\/attention_research_1.png\"><img decoding=\"async\" aria-describedby=\"caption-attachment-12821\" loading=\"lazy\" class=\"wp-image-12821\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/08\/attention_research_1-727x1024.png\" alt=\"\" width=\"369\" height=\"519\" srcset=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/08\/attention_research_1-727x1024.png 727w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/08\/attention_research_1-213x300.png 213w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/08\/attention_research_1-768x1082.png 768w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/08\/attention_research_1-1090x1536.png 1090w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/08\/attention_research_1.png 1320w\" sizes=\"(max-width: 369px) 100vw, 369px\"><\/a><\/p>\n<p id=\"caption-attachment-12821\" class=\"wp-caption-text\">The Encoder-Decoder Structure of the Transformer Architecture <br \/>Taken from \u201c<a href=\"https:\/\/arxiv.org\/abs\/1706.03762\">Attention Is All You Need<\/a>\u201c<\/p>\n<\/div>\n<p>In a nutshell, the task of the encoder, on the left half of the Transformer architecture, is to map an input sequence to a sequence of continuous representations, which is then fed into a decoder.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<p>The decoder, on the right half of the architecture, receives the output of the encoder together with the decoder output at the previous time step, to generate an output sequence.<\/p>\n<blockquote>\n<p><i>At each step the model is auto-regressive, consuming the previously generated symbols as additional input when generating the next.<\/i><\/p>\n<p><i>\u2013 <\/i><a href=\"https:\/\/arxiv.org\/abs\/1706.03762\">Attention Is All You Need<\/a>, 2017.<\/p>\n<\/blockquote>\n<h3><b>The 
Encoder<\/b><\/h3>\n<div id=\"attachment_13039\" style=\"width: 379px\" class=\"wp-caption aligncenter\">\n<a href=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_1.png\"><img decoding=\"async\" aria-describedby=\"caption-attachment-13039\" loading=\"lazy\" class=\"wp-image-13039\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_1-727x1024.png\" alt=\"\" width=\"369\" height=\"520\" srcset=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_1-727x1024.png 727w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_1-213x300.png 213w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_1-768x1082.png 768w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_1-1090x1536.png 1090w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_1.png 1320w\" sizes=\"(max-width: 369px) 100vw, 369px\"><\/a><\/p>\n<p id=\"caption-attachment-13039\" class=\"wp-caption-text\">The Encoder Block of the Transformer Architecture <br \/>Taken from \u201c<a href=\"https:\/\/arxiv.org\/abs\/1706.03762\">Attention Is All You Need<\/a>\u201c<\/p>\n<\/div>\n<p>The encoder consists of a stack of $N$ = 6 identical layers, where each layer is composed of two sublayers:<\/p>\n<ol>\n<li>The first sublayer implements a multi-head self-attention mechanism. 
<a href=\"https:\/\/machinelearningmastery.com\/the-transformer-attention-mechanism\">We had seen<\/a> that the multi-head mechanism implements $h$ heads that receive a (different) linearly projected version of the queries, keys and values each, to produce $h$ outputs in parallel that are then used to generate a final result.<span class=\"Apple-converted-space\">\u00a0<\/span>\n<\/li>\n<\/ol>\n<ol start=\"2\">\n<li>The second sublayer is a fully connected feed-forward network, consisting of two linear transformations with Rectified Linear Unit (ReLU) activation in between:<\/li>\n<\/ol>\n<p style=\"text-align: center;\">$$\\text{FFN}(x) = \\text{ReLU}(\\mathbf{W}_1 x + b_1) \\mathbf{W}_2 + b_2$$<\/p>\n<p>The six layers of the Transformer encoder apply the same linear transformations to all of the words in the input sequence, but <i>each<\/i> layer employs different weight ($\\mathbf{W}_1, \\mathbf{W}_2$) and bias ($b_1, b_2$) parameters to do so.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<p>Furthermore, each of these two sublayers has a residual connection around it.<\/p>\n<p>Each sublayer is also succeeded by a normalization layer, $\\text{layernorm}(\\cdot)$, which normalizes the sum computed between the sublayer input, $x$, and the output generated by the sublayer itself, $\\text{sublayer}(x)$:<\/p>\n<p style=\"text-align: center;\">$$\\text{layernorm}(x + \\text{sublayer}(x))$$<\/p>\n<p>An important consideration to keep in mind is that the Transformer architecture cannot inherently capture any information about the relative positions of the words in the sequence, since it does not make use of recurrence. This information has to be injected by introducing <i>positional encodings<\/i> to the input embeddings.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<p>The positional encoding vectors are of the same dimension as the input embeddings, and are generated using sine and cosine functions of different frequencies. 
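As an illustrative sketch (not code from the original post), the sinusoidal positional encodings can be generated as follows; the function name and the NumPy implementation are our own assumptions, with an even $d_{model}$ assumed:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # Hypothetical helper implementing the paper's sinusoids:
    # PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    # PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
    pos = np.arange(seq_len)[:, np.newaxis]        # shape (seq_len, 1)
    i = np.arange(d_model // 2)[np.newaxis, :]     # shape (1, d_model // 2)
    angles = pos / np.power(10000.0, (2 * i) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                   # even dimensions
    pe[:, 1::2] = np.cos(angles)                   # odd dimensions
    return pe

pe = positional_encoding(seq_len=50, d_model=512)  # one row per position
```

Each row of `pe` has the same dimension as the corresponding input embedding.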
Then, they are simply summed to the input embeddings in order to <i>inject<\/i> the positional information.<\/p>\n<h3><b>The Decoder<span class=\"Apple-converted-space\">\u00a0<\/span><\/b><\/h3>\n<div id=\"attachment_13040\" style=\"width: 379px\" class=\"wp-caption aligncenter\">\n<a href=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_2.png\"><img decoding=\"async\" aria-describedby=\"caption-attachment-13040\" loading=\"lazy\" class=\"wp-image-13040\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_2-727x1024.png\" alt=\"\" width=\"369\" height=\"520\" srcset=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_2-727x1024.png 727w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_2-213x300.png 213w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_2-768x1082.png 768w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_2-1090x1536.png 1090w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/10\/transformer_2.png 1320w\" sizes=\"(max-width: 369px) 100vw, 369px\"><\/a><\/p>\n<p id=\"caption-attachment-13040\" class=\"wp-caption-text\">The Decoder Block of the Transformer Architecture <br \/>Taken from \u201c<a href=\"https:\/\/arxiv.org\/abs\/1706.03762\">Attention Is All You Need<\/a>\u201c<\/p>\n<\/div>\n<p>The decoder shares several similarities with the encoder.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<p>The decoder also consists of a stack of $N$ = 6 identical layers, each of which is composed of three sublayers:<\/p>\n<ol>\n<li>The first sublayer receives the previous output of the decoder stack, augments it with positional information, and implements multi-head self-attention over it. 
While the encoder is designed to attend to all words in the input sequence, <i>regardless<\/i> of their position in the sequence, the decoder is modified to attend <i>only<\/i> to the preceding words. Hence, the prediction for a word at position $i$ can only depend on the known outputs for the words that come before it in the sequence.<span class=\"Apple-converted-space\">\u00a0<\/span>In the multi-head attention mechanism (which implements multiple single attention functions in parallel), this is achieved by introducing a mask over the values produced by the scaled multiplication of matrices $\\mathbf{Q}$ and $\\mathbf{K}$. This masking is implemented by suppressing the matrix values that would otherwise correspond to illegal connections:<\/li>\n<\/ol>\n<p style=\"text-align: center;\">$$<br \/>\n\\text{mask}(\\mathbf{QK}^T) =<br \/>\n\\text{mask} \\left( \\begin{bmatrix}<br \/>\ne_{11} &amp; e_{12} &amp; \\dots &amp; e_{1n} \\\\<br \/>\ne_{21} &amp; e_{22} &amp; \\dots &amp; e_{2n} \\\\<br \/>\n\\vdots &amp; \\vdots &amp; \\ddots &amp; \\vdots \\\\<br \/>\ne_{m1} &amp; e_{m2} &amp; \\dots &amp; e_{mn} \\\\<br \/>\n\\end{bmatrix} \\right) =<br \/>\n\\begin{bmatrix}<br \/>\ne_{11} &amp; -\\infty &amp; \\dots &amp; -\\infty \\\\<br \/>\ne_{21} &amp; e_{22} &amp; \\dots &amp; -\\infty \\\\<br \/>\n\\vdots &amp; \\vdots &amp; \\ddots &amp; \\vdots \\\\<br \/>\ne_{m1} &amp; e_{m2} &amp; \\dots &amp; e_{mn} \\\\<br \/>\n\\end{bmatrix}<br \/>\n$$<\/p>\n<p>\u00a0<\/p>\n<div id=\"attachment_12893\" style=\"width: 187px\" class=\"wp-caption aligncenter\">\n<a href=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/09\/tour_3.png\"><img decoding=\"async\" aria-describedby=\"caption-attachment-12893\" loading=\"lazy\" class=\"wp-image-12893\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/09\/tour_3-609x1024.png\" alt=\"\" width=\"177\" height=\"298\" srcset=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/09\/tour_3-609x1024.png 609w, 
https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/09\/tour_3-178x300.png 178w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/09\/tour_3.png 627w\" sizes=\"(max-width: 177px) 100vw, 177px\"><\/a><\/p>\n<p id=\"caption-attachment-12893\" class=\"wp-caption-text\">The Multi-Head Attention in the Decoder Implements Several Masked, Single Attention Functions <br \/>Taken from \u201c<a href=\"https:\/\/arxiv.org\/abs\/1706.03762\">Attention Is All You Need<\/a>\u201c<\/p>\n<\/div>\n<blockquote>\n<p><i>The masking makes the decoder\u00a0unidirectional (unlike the bidirectional encoder).<\/i><\/p>\n<p><i>\u2013<span class=\"Apple-converted-space\">\u00a0 <\/span><\/i><a href=\"https:\/\/www.amazon.com\/Advanced-Deep-Learning-Python-next-generation\/dp\/178995617X\">Advanced Deep Learning with Python<\/a>, 2019.<\/p>\n<\/blockquote>\n<ol start=\"2\">\n<li>The second sublayer implements a multi-head attention mechanism, similar to the one implemented in the first sublayer of the encoder.<span class=\"Apple-converted-space\">\u00a0<\/span>On the decoder side, this multi-head mechanism receives the queries from the previous decoder sublayer, and the keys and values from the output of the encoder. 
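Both the masked self-attention of the first sublayer and this encoder-decoder attention reduce to the same scaled dot-product computation, differing only in where the queries, keys and values come from and whether a mask is applied. A minimal NumPy sketch (names, shapes and the single-head simplification are our own assumptions):

```python
import numpy as np

def attention(q, k, v, mask=None):
    # softmax(Q K^T / sqrt(d_k)) V, with masked positions set to -infinity
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if mask is not None:
        scores = np.where(mask, scores, -np.inf)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
dec = rng.normal(size=(4, 8))    # 4 decoder positions, d_k = 8
enc = rng.normal(size=(6, 8))    # 6 encoder positions (encoder output)

# first sublayer: masked self-attention; a lower-triangular mask hides future words
causal = np.tril(np.ones((4, 4), dtype=bool))
self_out = attention(dec, dec, dec, mask=causal)

# second sublayer: queries from the decoder, keys and values from the encoder (no mask)
cross_out = attention(dec, enc, enc)
```

The $-\infty$ entries become zeros after the softmax, so each decoder position can only attend to itself and earlier positions in the self-attention sublayer.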
This allows the decoder to attend to all of the words in the input sequence.<\/li>\n<\/ol>\n<ol start=\"3\">\n<li>The third sublayer implements a fully connected feed-forward network, which is similar to the one implemented in the second sublayer of the encoder.<\/li>\n<\/ol>\n<p>Furthermore, the three sublayers on the decoder side also have residual connections around them, and are succeeded by a normalization layer.<\/p>\n<p>Positional encodings are also added to the input embeddings of the decoder, in the same manner as previously explained for the encoder.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<h2><b>Sum Up: The Transformer Model<\/b><\/h2>\n<p>The Transformer model runs as follows:<\/p>\n<ol>\n<li>Each word forming an input sequence is transformed into a $d_{\\text{model}}$-dimensional embedding vector.<span class=\"Apple-converted-space\">\u00a0<\/span>\n<\/li>\n<\/ol>\n<ol start=\"2\">\n<li>Each embedding vector representing an input word is augmented by summing it (element-wise) to a positional encoding vector of the same $d_{\\text{model}}$ length, hence introducing positional information into the input.<span class=\"Apple-converted-space\">\u00a0<\/span>\n<\/li>\n<\/ol>\n<ol start=\"3\">\n<li>The augmented embedding vectors are fed into the encoder block, consisting of the two sublayers explained above. 
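To make the earlier sublayer description concrete, the position-wise feed-forward network wrapped in its residual connection and normalization layer might be sketched as follows (a NumPy illustration with hypothetical weights, written in row-vector convention; not the post's own code):

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # layernorm(.): normalize each position across the feature dimension
    return (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + eps)

def ffn(x, w1, b1, w2, b2):
    # FFN(x): two linear transformations with a ReLU activation in between
    return np.maximum(0.0, x @ w1 + b1) @ w2 + b2

rng = np.random.default_rng(0)
d_model, d_ff = 8, 32
x = rng.normal(size=(5, d_model))                  # 5 positions in the sequence
w1 = rng.normal(size=(d_model, d_ff)); b1 = np.zeros(d_ff)
w2 = rng.normal(size=(d_ff, d_model)); b2 = np.zeros(d_model)

# residual connection around the sublayer, then normalization: layernorm(x + sublayer(x))
out = layer_norm(x + ffn(x, w1, b1, w2, b2))
```

The same wrapper of residual connection plus normalization applies around every sublayer in the stack.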
Since the encoder attends to all words in the input sequence, irrespective of whether they precede or succeed the word under consideration, the Transformer encoder is <i>bidirectional<\/i>.<span class=\"Apple-converted-space\">\u00a0<\/span>\n<\/li>\n<\/ol>\n<ol start=\"4\">\n<li>The decoder receives as input its own predicted output word at time-step $t - 1$.<\/li>\n<\/ol>\n<ol start=\"5\">\n<li>The input to the decoder is also augmented by positional encoding, in the same manner as on the encoder side.<span class=\"Apple-converted-space\">\u00a0<\/span>\n<\/li>\n<\/ol>\n<ol start=\"6\">\n<li>The augmented decoder input is fed into the three sublayers comprising the decoder block explained above. Masking is applied in the first sublayer, in order to stop the decoder from attending to succeeding words. At the second sublayer, the decoder also receives the output of the encoder, which now allows the decoder to attend to all of the words in the input sequence.<\/li>\n<\/ol>\n<ol start=\"7\">\n<li>The output of the decoder finally passes through a fully connected layer, followed by a softmax layer, to generate a prediction for the next word of the output sequence.<span class=\"Apple-converted-space\">\u00a0<\/span>\n<\/li>\n<\/ol>\n<h2><b>Comparison to Recurrent and Convolutional Layers<\/b><\/h2>\n<p><a href=\"https:\/\/arxiv.org\/abs\/1706.03762\">Vaswani et al. 
(2017)<\/a> explain that their motivation for abandoning the use of recurrence and convolutions was based on several factors:<\/p>\n<ol>\n<li>Self-attention layers were found to be faster than recurrent layers for shorter sequence lengths, and can be restricted to consider only a neighbourhood in the input sequence for very long sequence lengths.<span class=\"Apple-converted-space\">\u00a0<\/span>\n<\/li>\n<\/ol>\n<ol start=\"2\">\n<li>The number of sequential operations required by a recurrent layer grows with the sequence length, whereas this number remains constant for a self-attention layer.<span class=\"Apple-converted-space\">\u00a0<\/span>\n<\/li>\n<\/ol>\n<ol start=\"3\">\n<li>In convolutional neural networks, the kernel width directly affects the long-term dependencies that can be established between pairs of input and output positions. Tracking long-term dependencies would require large kernels, or stacks of convolutional layers, which could increase the computational cost.<\/li>\n<\/ol>\n<h2><b>Further Reading<\/b><\/h2>\n<p>This section provides more resources on the topic if you are looking to go deeper.<\/p>\n<h3><b>Books<\/b><\/h3>\n<ul>\n<li>\n<a href=\"https:\/\/www.amazon.com\/Advanced-Deep-Learning-Python-next-generation\/dp\/178995617X\">Advanced Deep Learning with Python<\/a>, 2019.<\/li>\n<\/ul>\n<h3><b>Papers<\/b><\/h3>\n<ul>\n<li>\n<a href=\"https:\/\/arxiv.org\/abs\/1706.03762\">Attention Is All You Need<\/a>, 2017.<\/li>\n<\/ul>\n<h2><b>Summary<\/b><\/h2>\n<p>In this tutorial, you discovered the network architecture of the Transformer model.<\/p>\n<p>Specifically, you learned:<\/p>\n<ul>\n<li>How the Transformer architecture implements an encoder-decoder structure without recurrence and convolutions.<span class=\"Apple-converted-space\">\u00a0<\/span>\n<\/li>\n<li>How the Transformer encoder and decoder work.<span class=\"Apple-converted-space\">\u00a0<\/span>\n<\/li>\n<li>How the Transformer self-attention compares to 
recurrent and convolutional layers.<\/li>\n<\/ul>\n<p>Do you have any questions?<br \/>\nAsk your questions in the comments below and I will do my best to answer.<\/p>\n<p>The post <a rel=\"nofollow\" href=\"https:\/\/machinelearningmastery.com\/the-transformer-model\/\">The Transformer Model<\/a> appeared first on <a rel=\"nofollow\" href=\"https:\/\/machinelearningmastery.com\/\">Machine Learning Mastery<\/a>.<\/p>\n<\/div>\n<p><a href=\"https:\/\/machinelearningmastery.com\/the-transformer-model\/\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Stefania Cristina We have already familiarized ourselves with the concept of self-attention as implemented by the Transformer attention mechanism for neural machine translation. We [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2021\/11\/09\/the-transformer-model\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":5190,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/5189"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=5189"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/5189\/revisi
ons"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/5190"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=5189"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=5189"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=5189"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}