File name: Attention Is All You Need.pdf
  Category: Deep learning
  Development tool:
  File size: 2 MB
  Downloads: 0
  Upload date: 2019-07-13
  Uploader: weixin_********
Detailed description: The Transformer architecture proposed by Google.

[Figure 1: The Transformer model architecture.]

Decoder: The decoder is also composed of a stack of N = 6 identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i.

3.2 Attention

An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.

3.2.1 Scaled Dot-Product Attention

We call our particular attention "Scaled Dot-Product Attention" (Figure 2). The input consists of queries and keys of dimension d_k, and values of dimension d_v. We compute the dot products of the query with all keys, divide each by \sqrt{d_k}, and apply a softmax function to obtain the weights on the values.

[Figure 2: (left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.]

In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix Q. The keys and values are also packed together into matrices K and V. We compute the matrix of outputs as

    Attention(Q, K, V) = softmax(Q K^T / \sqrt{d_k}) V    (1)

The two most commonly used attention functions are additive attention [2] and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of 1/\sqrt{d_k}. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.

While for small values of d_k the two mechanisms perform similarly, additive attention outperforms dot-product attention without scaling for larger values of d_k [3]. We suspect that for large values of d_k, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients. (To illustrate why the dot products get large, assume that the components of q and k are independent random variables with mean 0 and variance 1. Then their dot product, q · k = \sum_{i=1}^{d_k} q_i k_i, has mean 0 and variance d_k.) To counteract this effect, we scale the dot products by 1/\sqrt{d_k}.

3.2.2 Multi-Head Attention

Instead of performing a single attention function with d_model-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values h times with different, learned linear projections to d_k, d_k and d_v dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding d_v-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2.

Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.

    MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O
        where head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)

where the projections are parameter matrices W_i^Q ∈ R^{d_model × d_k}, W_i^K ∈ R^{d_model × d_k}, W_i^V ∈ R^{d_model × d_v} and W^O ∈ R^{h·d_v × d_model}.

In this work we employ h = 8 parallel attention layers, or heads. For each of these we use d_k = d_v = d_model / h = 64. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.
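To make Equation (1) and the multi-head construction above concrete, here is a minimal NumPy sketch. The function names, the optional mask argument (which anticipates the decoder masking described in Section 3.2.3), and the random test inputs are illustrative assumptions, not code from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row-wise max before exponentiating for numerical stability.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Equation (1): softmax(Q K^T / sqrt(d_k)) V.

    Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v). mask, if given, is a boolean
    (n_q, n_k) array that is True where attention is allowed (see Section 3.2.3).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # compatibility of every query with every key
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # effectively -inf on illegal connections
    return softmax(scores, axis=-1) @ V        # weighted sum of the values

def multi_head_attention(Q, K, V, WQ, WK, WV, WO):
    """MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O.

    WQ, WK, WV are lists of h projection matrices of shape (d_model, d_k) or
    (d_model, d_v); WO has shape (h * d_v, d_model).
    """
    heads = [scaled_dot_product_attention(Q @ wq, K @ wk, V @ wv)
             for wq, wk, wv in zip(WQ, WK, WV)]     # h parallel heads, each (n_q, d_v)
    return np.concatenate(heads, axis=-1) @ WO      # concatenate and project back to d_model

# Illustrative self-attention call with h = 8 heads and d_k = d_v = d_model / h = 64.
rng = np.random.default_rng(0)
d_model, h = 512, 8
d_k = d_v = d_model // h
x = rng.normal(size=(10, d_model))                          # 10 positions of dimension d_model
WQ = [rng.normal(size=(d_model, d_k)) for _ in range(h)]
WK = [rng.normal(size=(d_model, d_k)) for _ in range(h)]
WV = [rng.normal(size=(d_model, d_v)) for _ in range(h)]
WO = rng.normal(size=(h * d_v, d_model))
print(multi_head_attention(x, x, x, WQ, WK, WV, WO).shape)  # (10, 512)
```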
3.2.3 Applications of Attention in our Model

The Transformer uses multi-head attention in three different ways:

  • In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [36, 2, 9].
  • The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.
  • Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to -∞) all values in the input of the softmax which correspond to illegal connections. See Figure 2.

3.3 Position-wise Feed-Forward Networks

In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.

    FFN(x) = max(0, x W_1 + b_1) W_2 + b_2    (2)

While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is d_model = 512, and the inner layer has dimensionality d_ff = 2048. In the appendix we describe how the position-wise feed-forward network can also be seen as a form of attention.
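As a quick companion to Equation (2), the following hedged NumPy sketch applies the position-wise feed-forward network to every position at once; the parameter names and random initialisation are illustrative choices, not the paper's.

```python
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    """Equation (2): FFN(x) = max(0, x W1 + b1) W2 + b2, applied identically at each position."""
    hidden = np.maximum(0.0, x @ W1 + b1)   # first linear transformation followed by ReLU
    return hidden @ W2 + b2                 # second linear transformation back to d_model

# Illustrative shapes: d_model = 512, inner-layer dimensionality d_ff = 2048.
rng = np.random.default_rng(0)
d_model, d_ff = 512, 2048
x = rng.normal(size=(10, d_model))          # 10 positions
W1, b1 = rng.normal(size=(d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)), np.zeros(d_model)
print(position_wise_ffn(x, W1, b1, W2, b2).shape)   # (10, 512)
```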
3.4 Embeddings and Softmax

Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension d_model. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [29]. In the embedding layers, we multiply those weights by \sqrt{d_model}.

3.5 Positional Encoding

Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension d_model as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed [9]. In this work, we use sine and cosine functions of different frequencies:

    PE_(pos, 2i)   = sin(pos / 10000^{2i / d_model})
    PE_(pos, 2i+1) = cos(pos / 10000^{2i / d_model})

where pos is the position and i is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2π to 10000 · 2π. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k, PE_{pos+k} can be represented as a linear function of PE_{pos}.

We also experimented with using learned positional embeddings [9] instead, and found that the two versions produced nearly identical results (see Table 3 row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.
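A short NumPy sketch of the sinusoidal encoding above; the function name and the assumption of an even d_model are mine, not the paper's.

```python
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))."""
    pos = np.arange(max_len)[:, None]                    # (max_len, 1) token positions
    i = np.arange(d_model // 2)[None, :]                 # (1, d_model/2) dimension indices
    angles = pos / np.power(10000.0, 2 * i / d_model)    # wavelengths form a geometric progression
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)                         # even dimensions use sine
    pe[:, 1::2] = np.cos(angles)                         # odd dimensions use cosine
    return pe

# The encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks.
pe = sinusoidal_positional_encoding(max_len=50, d_model=512)
print(pe.shape)   # (50, 512)
```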
4 Why Self-Attention

In this section we compare various aspects of self-attention layers to the recurrent and convolutional layers commonly used for mapping one variable-length sequence of symbol representations (x_1, ..., x_n) to another sequence of equal length (z_1, ..., z_n), with x_i, z_i ∈ R^d, such as a hidden layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we consider three desiderata.

One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required.

The third is the path length between long-range dependencies in the network. Learning long-range dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the ability to learn such dependencies is the length of the paths forward and backward signals have to traverse in the network. The shorter these paths between any combination of positions in the input and output sequences, the easier it is to learn long-range dependencies [12]. Hence we also compare the maximum path length between any two input and output positions in networks composed of the different layer types.

Table 1: Maximum path lengths, per-layer complexity and minimum number of sequential operations for different layer types. n is the sequence length, d is the representation dimension, k is the kernel size of convolutions and r the size of the neighborhood in restricted self-attention.

    Layer Type                   Complexity per Layer   Sequential Operations   Maximum Path Length
    Self-Attention               O(n^2 · d)             O(1)                    O(1)
    Recurrent                    O(n · d^2)             O(n)                    O(n)
    Convolutional                O(k · n · d^2)         O(1)                    O(log_k(n))
    Self-Attention (restricted)  O(r · n · d)           O(1)                    O(n/r)

As noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires O(n) sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length n is smaller than the representation dimensionality d, which is most often the case with sentence representations used by state-of-the-art models in machine translation, such as word-piece [36] and byte-pair [30] representations. To improve computational performance for tasks involving very long sequences, self-attention could be restricted to considering only a neighborhood of size r in the input sequence centered around the respective output position. This would increase the maximum path length to O(n/r). We plan to investigate this approach further in future work.

A single convolutional layer with kernel width k < n does not connect all pairs of input and output positions.
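The restricted self-attention variant is only proposed here, not implemented in the paper. Purely as an illustration of the idea, the sketch below limits each position to a neighborhood of size r using a band-shaped mask inside scaled dot-product self-attention; the function name and the r // 2 half-width interpretation of the neighborhood are assumptions of mine.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def restricted_self_attention(x, r):
    """Scaled dot-product self-attention where position i only attends to positions j with |i - j| <= r // 2."""
    n, d = x.shape
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) <= r // 2   # band-shaped neighborhood of size ~r
    scores = x @ x.T / np.sqrt(d)                          # queries, keys and values are all x
    scores = np.where(mask, scores, -1e9)                  # block attention outside the neighborhood
    return softmax(scores) @ x

rng = np.random.default_rng(0)
out = restricted_self_attention(rng.normal(size=(12, 64)), r=4)
print(out.shape)   # (12, 64)
```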